This past week, I offered to prepare a short advisory document for an academic organization that was planning to increase its web presence. I think that academic organizations do well to model their sites, and the people who are asked to maintain them, along the lines of established academic institutions and to develop “officers,” mission statements, and policies. I think that we should also follow the basic academic method of being collaborative and deliberate; the results will be better as well. Even a single-author blog is in some way collaborative, as it relies on colleagues and collaborators to link to its posts or share them on Twitter. Being deliberate is deeply ingrained in the most conservative traditions of academic life.
With some slight modifications to protect the innocent, here it is:
1. Audience. The most important thing about any website is to have a clear idea of an audience. For example, my Archaeology of the Mediterranean World site appeals generally to academics interested in Mediterranean archaeology, ancient and Byzantine history, and technology. So while most of the content (see below) on my site counts as a kind of “mindcasting”, I do try to mindcast on things of interest to a notional audience.
2. “Content is King”. For a website to “work” people have to work it into their everyday life. To do this, the site needs to be updated regularly (at least weekly) with new content so people want to come back and check it out. The best way to keep a site updated regularly is to develop a group of dedicated contributors. The era of the static website full of “resources” is over.
3. Contributors. If the website is going to thrive it has to have some regularly updated content. This does not have to be daily, but it needs in some way to be regular. To maintain a regular flow of content, you need to have multiple contributors. A good editor can drum up contributors and provide content when needed, but it is essential to have a core group of people willing to work to produce significant web content. (I think that there is a small but rather committed community already producing good quality content for the web, and we should be able to leverage this community.) My general feeling is that no section of the website will remain up-to-date and interesting without at least a few contributors. Moreover, having a few contributors will prevent a section of the site from becoming a single editor’s soapbox.
4. An Editor. The best websites have an editor or a group of designated editors who are responsible for content in particular areas of the site. The editors’ responsibilities might include soliciting new content, maintaining basic information on their section of the site, and establishing policies. Also, naming someone an “editor” confers a certain amount of academic and intellectual prestige on these positions (and makes it easier for a mid-career faculty member to claim this work as part of “national service” or whatever). We might also consider bringing in, say, one or two other editors (a “Blog Editor,” perhaps, or even a “Features Editor”). The advantage of giving these individuals real editorial control over their sections is that they can be gatekeepers for the content coming onto the web, ensure its quality, maintain the content, publicize the content, et c. Moreover, multiple contributors are also more likely to provoke some positive discussion.
5. Mission statement. Since this will be something of an official site, we should probably come up with some kind of simple, broad mission statement that will help us create policies for the kind of material that we include on our site. For example, do we intend the site to be a scholarly resource, or do we want to try to cater to broader academic interests? Or do we want to do both? In any event, a mission statement will help us think about our audience and the types of things that we value.
6. Policies. I know that this will seem overwrought, but as someone with a public web presence, I have been overwhelmed by the range of strange proposals that I get to feature material on my little blog. Having a policy on what kinds of material you will or won’t allow will make the editors’ jobs much easier. For example, will we let people post advertisements for their books on the site? Will we let people submit job ads? Will we advertise summer programs? You can imagine.
7. Design. The nicest websites have some common design elements. If the plan is to use an institutional server (rather than a commercial service) to host the site as the central hub for a web presence that would then push traffic to various externally hosted pages, then it would be great to have some kind of common design for these external pages (and include cues on the Princeton page).
8. Software. Blogs are great. I say this not just because I am a blogger: the ease of updating a blog makes it ideal for regularly updated content. Moreover, many of the good blog services (e.g. wordpress.com hosts WordPress software on their servers) or software packages (e.g. WordPress is free to download and relatively easy to set up on an institution’s servers) allow you to create static pages as well as blog pages. They are also equipped with an RSS feed, et c., making them really easy to update and edit by people with almost no technical knowledge.
9. Social Media. If we are serious about developing a web presence for our organization we need to consider having an integrated social media component. Social media sites like Twitter and Facebook work well to connect potential readers to the web site and serve as a key method for pushing content to a wider audience. In general, social media services are fairly easy to maintain and manage. That being said, like the website itself, content drives traffic. If we don’t maintain social media, then we won’t reap its benefits.
10. Take our time. One thing I’ve seen other places do is to rush out a web presence before they have developed content, policies, or even a kind of editorial or institutional support. The results have been pretty dodgy and have not held up well. Taking time to develop how a website will work and who will be responsible for what parts of the site will produce the best quality results.
Brief Review of CLIR and Tufts: Rome Wasn’t Digitized in a Day: Building a Cyberinfrastructure for Digital Classics
This weekend, I finally made it through the most recent report on cyberinfrastructure and digital Classics. As the title of this post indicates, it was produced by the Council on Library and Information Resources and Tufts University, a longtime leader in the field of digital Classics. The report is massive, running to over 250 pages, and gives a feeling of exhaustiveness. The bulk of the report consists of a series of case studies organized into the various allied and sub-disciplines of Classics (Philology, Archaeology, Papyrology, Epigraphy, Prosopography, et c.). For most case studies there is abundant technical detail as well as some information on the guiding principles of the project, intended end-users, funding sources, and institutional affiliation. There is a pronounced emphasis on the core area of Classics and the analysis of texts of various kinds (inscribed, on papyrus, in edition, et c.), and with this emphasis on texts comes a corresponding emphasis on mark-up technology, collaborative editing, and various image-to-text initiatives like Greek and Latin OCR. The report’s scope, detail, organization, and bibliography make it a must-read for anyone interested in the work of digital humanities, digital Classics, or the future of the discipline of Classics. It is the type of report that any graduate student going on the job market should at least skim to become familiar with the basic terms, programs, and projects in the field of digital Classics.
While I am hardly qualified to comment on the content of the report, a few things struck me as worth pointing out:
1. New models of collaboration for new kinds of texts. The most exciting thing about this report is the new perspectives on scholarly collaboration. At the center of these new perspectives is a set of new tools and collaborative environments designed to produce new kinds of texts. In general, these texts are dynamic, multilayered, and designed to take into account the work of numerous contributors. The next generation of scholarly editions, for example, will be increasingly transparent, allowing the end user to understand the processes that produced certain editorial decisions and, if necessary, to filter the various editorial decisions to produce new versions of a text in keeping with new analytical, interpretative, or methodological positions. The same collaborative environment extends to epigraphy, papyrology, and even archaeology (in some way), where scholars have developed ways to work together to pool resources from around the world and to create new groups of texts. These new collections of texts are born digital, making specialized bodies of material (like epigraphical and papyrological corpora) more widely available and more susceptible to re-analysis and re-interpretation. The scalability of digital technology allows multiple scholars, a wide range of end-users, and diverse digital objects (texts, images, and interpretative methods) to all exist in the same place at the same time. These are new, transparent, and productive scholarly environments.
2. Human infrastructure. There is no doubt that the projects described in this report are exciting, but I felt that the report took the notion of cyberinfrastructure a bit too literally at times. In places the projects described by the CLIR and Tufts teams stood strangely disembodied from larger social, institutional, and professional pressures and incentives. While the report made an obligatory mention of studies of scholarly collaboration, professional pressures, and potential end-users, I was not as easily able to grasp the creative environments from which these innovative programs sprang. In particular, I struggled to identify the research questions or, more broadly, the scholarly discourse that inspired these new approaches to age-old problems. I recognize, of course, that large-scale digital initiatives often take into account a wide range of initiatives, research questions, and stakeholders, but at the same time, scholarly collaboration, while sometimes altruistic, rarely exists without some common research objectives. Moreover, these research objectives must exist in an environment where administrators, technical staff, and colleagues have the interest and the resources to promote and encourage innovation. The human infrastructure necessary to support cyberinfrastructure projects, to my mind, is far more crucial to their long-term health than the relatively ephemeral character of technical detail. And this human infrastructure extends to how we teach students and the nature of academic and scholarly expectations. With more dynamic and robust tools available, it is curious that the willingness to avail oneself of these tools remains, to some extent, optional within the academic discourse. In other words, the eventual success of a digital infrastructure project will depend on the willingness of an editor, a peer reviewer, or a conference panel to expect a scholar to use a particular corpus of material.
The human infrastructure, then, represents a dense and complex web of knowledge, traditional practices, and support infrastructure that, to my mind, is far more important than the tools and vision at the root of a cyberinfrastructure project.
3. The Social and New Media. Another slight oversight in this comprehensive report is the absence of any real discussion of the role of the public backchannel in Classics cyberinfrastructure. By this backchannel I mean both blogs and the growing role of social media in stimulating discussion among scholars of the ancient world on topics both digital and traditional. I am not one of those people who think that blogs are the new academic journals or who even press for new media spaces to carry substantial weight in tenure, promotion, or professional development decisions. On the other hand, I have argued that blogs occupy a novel and useful place in the expanding digital information ecosystem of Classics. And bloggers and their blogs, like many other larger, more integrative digital infrastructure projects, have not come to terms with the tricky task of curating and preserving the huge quantity of analysis, discussion, and even knowledge produced through these new media. With the growth of Twitter, Facebook, and other even more ephemeral social media portals, the issue of curation has become even trickier. If we imagine social and new media applications as playing a role in our digital future as scholars, then these outlets have to become part of the conversation about the digital future of the discipline.
4. Mobile Futures. Finally, I was surprised that mobile computing did not occupy a more significant place in this report. If I understand the global trends in computing, the future is in mobile devices and applications. In fact, I read the report on my iPad. I do realize, of course, that some of the mobile computing “revolution” will involve us just doing on a mobile device what we’ve always done on a laptop or a desktop, but there is also a trend toward re-imagining how we work and how we disseminate data over mobile devices. As we look ahead, it seems clear to me that mobile devices, the cloud, and even greater degrees of integration and communication will produce new challenges for curation and new opportunities for realtime collaboration.
As I said at the top, this report is a roadmap for anyone interested in the state of the art in digital Classics and presents a brilliant case study of the impact of humanities computing in one field. Any gaps or oversights are incidental and tied more to the goals of the project than to any shortcomings of the authors.
Crossposted to Teaching Thursday
This week the Senate Continuing Education Committee hosted its regular Online Teaching Showcase. Each semester the showcase brings together faculty who teach online and asks them to share some of the techniques and technologies that they use to make their online classes more successful. In some ways, this regular gathering of online teaching faculty is a great way to get a sense of future directions in online teaching.
Many of the most common (and intriguing) applications that faculty used to reach their online and distant students sought to facilitate realtime interaction between faculty and student. The old stalwarts, Adobe Connect and the various Wimba applications (which are conveniently bundled into Blackboard), made an appearance. Their reliable and familiar interfaces allow faculty to stream a lecture to a group of students in real time, record the lecture for an archive, and share screens with students. Tegrity Lecture Capture joined these two applications as another option for faculty who are interested in recording lectures live. Tegrity is a server-based (or, as they say now, “cloud”-based) application that allows students to view lectures either in real time or recorded without installing software on their computer. To watch a recorded lecture, the student downloads a relatively small executable file which they then run on their computer. Based on the demonstration that I saw at the Showcase, Tegrity allows faculty to track students who stream the lectures from the cloud. Faculty could not only see how long a student viewed a recorded lecture, but also isolate parts of the lecture that a student re-watched in order to identify problem concepts or explanations.
I also saw a demonstration of Tidebreak, an application that creates a dynamic, shared environment where students and faculty can share screens, swap files, and even take control of a central, shared workstation to demonstrate a procedure or execute a task. I could imagine that software like Tidebreak could be used alongside Adobe Connect or Wimba to create a far more interactive online classroom, but with this advance comes greater complexity.
Cloud-based computing was also on display with products like Citrix. Citrix allows students to access applications run “in the cloud”. The applications range from Adobe products like Photoshop to the standard suite of Microsoft offerings (Excel, Word, Access) and even more specialized applications like the statistics package SPSS. From what I can tell, the goal of this kind of service is to allow students access to software without the expense and complications of individual licensing. It will eventually allow a faculty member to create an online computer lab where they could work with a group of students using virtualized software (again, from the cloud) without making them each buy the applications or worrying about the hardware that remote students are running.
The applicability of these new applications and services is immediately apparent to the part of me that wants to create a richer, more dynamic online classroom. Another part of me observes that the complexity of these applications will certainly increase the learning curve for a student engaging in online learning (even while services like Tegrity and Citrix could lower the point of entry from the standpoint of hardware and software). Much of the collaborative technology on display also privileged a live teaching environment. Most of my online teaching, however, and I imagine this is true for many faculty members, is done asynchronously. That is to say, we are not interacting with students live; instead, students are viewing course material at their own pace and interacting with the instructor or their fellow students at far less regular intervals than they would in a classroom environment. While I am sure the users of each of these technologies would stress that they could also work asynchronously, it still seemed clear to me that the goal was to reproduce the classroom experience in a virtual or online way, rather than to imagine the online classroom as something fundamentally different.
As you might imagine, I am pretty excited that Steven Ellis’s team’s use of the iPad as their primary field data recording device is getting some attention lately. I imagined this kind of digital workflow when I began working with Scott Moore to design the digital recording components of our project in Cyprus. Scott and I, from what I recall, always assumed a paper stage. This is what that stage looks like now:
I think that we fell back on the old archaeological wisdom that a paper stage somehow serves as a more dependable backup than digital copies. This led us to copying the entire archive each year and carrying it home (and still managing sometimes to lose copies of the original or not have them where we needed them). With a fully digital workflow, it is, of course, much easier to make copies of every stage of the documentation process, store them in multiple places, and, provided that a good version control system is in place, manage these copies.
I know that I also subscribed to the idea that paper copies preserve more fully the archaeological thought process. We insisted that our trench supervisors not keep separate, personal notebooks (they did anyway) and write directly onto our recording sheets as they excavated. The hope was that the image of the stratigraphic unit form provided the best record of the process of excavation. In fact, as much as was possible, we have sought to associate digital images of these sheets (and the trench plans of each stratigraphic unit) with the digital copies of this data. This remains a time-consuming process of keying the data from each sheet and digitizing each day’s trench plans. Having supervised the keying of most of our field data, I can attest to the hours of time and concentration that went into producing our digital versions. It’s mostly done now, but it was an onerous process and we haven’t quite produced data with the kind of immediate transparency that we had hoped for (although it is all still possible). Using the iPad to record the basic data from the trench directly into digital form would pay immediate dividends by streamlining the data collection process.
On the other hand, I do wonder whether some of the data associated with the archaeological process might be lost. I was thinking about the faint evidence for revision that appears on our paper recording sheets – typically under various forms of erasure (usually a strikethrough) – that preserves irregular fragments of the archaeological thought process. If Wikipedia has taught us anything, it is that digital recording makes it possible to capture this same data by recording each change to the data set and each earlier version. In effect, digital data collection could preserve a kind of digital palimpsest of each keystroke, deletion, adjustment, and mistaken measurement.
I am fascinated by this kind of micro-history and its potential to reveal patterns of behavior across an entire project and capture a more intimate look at how the archaeological method is performed.
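To make the palimpsest idea concrete, here is a minimal Python sketch of a recording form that appends every revision to a log rather than overwriting it. This is purely speculative; the field name and values are hypothetical and not drawn from any real recording system.

```python
from datetime import datetime, timezone

# A speculative sketch of the "digital palimpsest": instead of
# overwriting a value on a recording form, append every revision to a
# log so that earlier values (the digital strikethrough) survive.
class RevisionedField:
    def __init__(self, name):
        self.name = name
        self.revisions = []  # list of (timestamp, value) pairs

    def record(self, value):
        """Add a new value without discarding previous ones."""
        self.revisions.append((datetime.now(timezone.utc), value))

    @property
    def current(self):
        return self.revisions[-1][1] if self.revisions else None

    def history(self):
        """Every value ever recorded, oldest first."""
        return [value for _, value in self.revisions]

depth = RevisionedField("stratum_depth_cm")
depth.record(42)  # initial measurement
depth.record(45)  # correction after re-measuring; the 42 survives
```

The point of the design is that a correction is an addition, not a destruction: the erased measurement remains queryable, just as the strikethrough remains legible on paper.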
Just for fun, I used The Archivist to capture some of the buzz about the Apple story on Ellis’s use of the iPad. The Archivist lets you download all the Tweets associated with any search criteria. For my little experiment, I captured all the Tweets that used the words “Pompeii” and “iPad”. As of 6 am this morning, when I staggered into my office, I had captured 520+ Tweets. I then plotted them by hour over the last few days. Here’s the chart.
They have averaged about 5 tweets an hour over the last 100 hours or so. The peak was 95 tweets per hour between 12:20 and 1:20 pm on September 23rd. This surge continued over the next hour, which saw over 80 tweets, and subsided to under 40 tweets by 3:30 or so. The great thing about The Archivist is that it lets you download your Tweets so that you can data mine them using an application like RapidMiner. I didn’t do that, but I did do some simple mining. For example, Ellis’s name is mentioned in 131 of the tweets (or about 25% of the time), and about 16% of the Tweets are obvious “RT-style” re-tweets. In Tweets with both Pompeii and iPad in them, Ellis’s university, the University of Cincinnati, was never once mentioned, nor was his project’s name, the Porta Stabia project (even in two Tweets that appear to come from “official” University of Cincinnati channels!). In the hyper-economical world of Twitter, there are good reasons not to include long words like Cincinnati or relatively obscure project names. In contrast, the most common phrase is “Discovering ancient Pompeii with iPad,” which was the title of the Apple article; it appeared in 62% of the Tweets (suggesting that far more retweets happened than carried the traditional “RT” designation). For the record, my Tweet, which occurred very early in the Tweet cycle, led to only three retweets.
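This sort of simple mining takes only a few lines once the archive is exported. Here is a hedged sketch (not The Archivist itself) assuming the captured tweets are available as plain text strings; the sample tweets are invented for illustration.

```python
# Sample data standing in for an exported tweet archive (invented).
tweets = [
    "Discovering ancient Pompeii with iPad",
    "RT @classicist: Discovering ancient Pompeii with iPad",
    "Steven Ellis records Pompeii trenches straight onto an iPad",
]

def mention_rate(tweets, term):
    """Fraction of tweets containing the term (case-insensitive)."""
    hits = sum(1 for t in tweets if term.lower() in t.lower())
    return hits / len(tweets)

rt_share = mention_rate(tweets, "RT @")      # obvious "RT-style" retweets
ellis_share = mention_rate(tweets, "Ellis")  # tweets naming Ellis
```

The same function answers each of the questions above – name mentions, retweet share, title-phrase frequency – just by varying the search term.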
This is the kind of micro-historical analysis made possible by mining the minutiae preserved in a fully digital workflow.
By the way, it’s a double blog day! I thought that I needed to do something to mark my 800th post and in the tradition of the National Register of Historic Places, I thought I’d just put up a marker (with a few links, it is a blog after all).
I downloaded onto my iPad – via the Kindle application – a copy of Clay Shirky’s Cognitive Surplus (New York 2010). This book has received a good bit of attention on the interwebs, in large part because Shirky is unapologetic about the potential of the internet, and particularly its potential for good. In an era where one’s status as a pundit almost depends upon a certain cynical view of the world, this book is refreshing and positive.
In short, Shirky argues that the internet provides an outlet for surplus energy that the prosperity of the second half of the 20th century has made available to us. The rise in prosperity has allowed residents of the West, in particular, to enjoy increasing amounts of free-time and leisure. Shirky contends that the number one use of this leisure time over the last 60 years has been watching television. Watching television is solitary, somewhat anti-social, and, most importantly, passive.
The rise of the internet has slowly begun to encroach on the dominance of television. Unlike TV, the internet is social, provides a platform for both passive consumption and active production of media, and encourages the formation of communities with shared interests. The dynamic character of the web as a social platform functions to channel energies previously locked away in the passive relationship between the individual and the television. The web has already begun to channel the “cognitive surplus” unleashed by the West’s recent prosperity, but hitherto squandered through passive and more or less solitary leisure-time activities. Shirky’s best example of this is Wikipedia, which appeared out of the many moments of leisure enjoyed by tens of thousands of individual contributors. The result is a testimony to the aggregate knowledge of a global community of individuals which, prior to the internet, would never have found a singular, intellectually substantial expression.
While this is a cool thesis, it also caused me to think about a few things:
1. I am not convinced that the “cognitive” activity that Shirky associates with the internet comes directly from surplus time spent in front of the television. It’s a great idea, but a relatively unsophisticated argument. First, people have always used some of their free time in productive, social ways. Whether it is membership in a community organization, work with a church or other religious group, or serving as an elected official or a volunteer, the cognitive surplus created by economic prosperity has poured into innumerable areas of social and community life. As the internet allows communities to extend beyond the institutional and social confines of traditional, place-based communities, surely some of Shirky’s apparent “cognitive surplus” comes at the expense of these other, more traditional forms of community and social organization. At the same time, there are those who suggest that the rather diffuse creativity on display on the internet comes at the expense of more economically productive pursuits. The individuals who produce LOLCats, for example, might otherwise be watching television, but they also might be reading a book, working, or learning or refining a skill. I am all for these profoundly democratic expressions of creativity, but I’d be reluctant to argue that television and the internet form a kind of zero-sum dyad. The arguments for the evils of the internet, in fact, tend not to be arguments for the watching of television, but rather arguments that the internet undermines more rigorous, local, focused, and ultimately socially responsible uses of time and talent. Shirky does little to undermine these critiques.
2. The notion of channeling surplus is always appealing, but what really matters is how that surplus (cognitive or otherwise) is channeled. The downside of the unfettered and limitless nature of the internet is that it can minimize the impact of a small contribution while still giving the individual the sense of contributing to something larger. (And I say this as a blogger who regularly devotes 4 or 5 hours a week to launching my two cents into the void, and with the understanding that these 4 or 5 hours could be spent polishing up a lecture, reading another important argument, reading a graduate student’s paper just that much more carefully, or any number of professionally and socially responsible (impactful) activities.) The radically democratized space of the internet is not necessarily the most efficient venue for all forms of surplus. The “eat local” movement provides a nice model here. Just eating locally produced foods is not a sure-fire solution to the ecological, economic, and ethical problems facing large-scale food production in a globalized economy. In the same way, the sheer scale of the internet presents significant problems for the efficient use of specialized surplus.
3. Finally, this is the first book that I’ve read cover to cover (so to speak) on my iPad. The most interesting aspect of this experience (aside from the fact that the iPad is a very nice tool for reading a book) is that I could see where other people highlighted passages in Shirky’s book. Slight, dashed underlines showed me commonly annotated passages, and clicking on the passages indicated how many people underlined that particular text. Here is a great example of Shirky’s point about how the internet takes the solitary act of reading and annotating a text and turns it into a global activity with numerous participants creating a running commentary. While at present (as far as I can tell) the Kindle application only allows readers to share underlining, it would be remarkable in the future for readers to share margin notes, comments, and even links to other passages in other books. The aggregate of these activities would instantly turn any book into a critical edition.
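The aggregation behind those shared underlines is conceptually simple. As a toy illustration – assuming, purely for the sake of the sketch, that each shared highlight arrives as a (start, end) character range in the book’s text – the counting is a one-liner:

```python
from collections import Counter

# Invented data: each tuple is a hypothetical (start, end) character
# range that some reader highlighted. Real systems would also need to
# merge overlapping ranges, which this sketch ignores.
highlights = [(100, 140), (100, 140), (523, 560), (100, 140)]

popularity = Counter(highlights)
top_passage, top_count = popularity.most_common(1)[0]
```

Extending the record from a bare count to margin notes and cross-links would turn this tally into the running commentary described above.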
I am willing to try almost any piece of technology at least once if I think that it has the potential to improve the way that I teach, write, or do research. The investment in time required to learn a new piece of software or gizmo, while often unsatisfactory on an individual level, has so far paid dividends across the whole range of technologies that I use to manage my everyday life. To put it another way, I was very reluctant to learn to use the so-called e-mail, but the initial investment in learning Eudora (many years ago) has added a level of efficiency to my everyday life that more than makes up for the time wasted trying to learn to use the latest gizmo or application.
Over the past six months, I’ve used and appreciated a whole range of new technologies, ranging from my iPad and my Android-powered phone to light-duty web apps that solve an immediate problem (how is it possible to schedule a meeting without Doodle?). From that little gaggle of software and hardware, three pieces of intriguing technology stand out:
1. Omeka.net. I am really excited to be an alpha tester for Omeka.net. Omeka is online collection management software produced by the Center for History and the New Media at George Mason University and our neighbors at the Minnesota State Historical Society. It allows an individual or organization to organize and present collections of material – from texts and podcasts to images and video. As someone who views the world as a kind of infinite archive, a program of this kind has obvious appeal. For the last year, I’ve had Omeka running on a server at the University of North Dakota and it has become home for various collections of images, including a fine art photography exhibition, a research archive of vernacular architecture in Greece, and a small collection of maps from my survey project in Greece.
The only downside to the program was that it took me quite some time (and a bit of money) to get it up and running on a University server. Omeka.net eliminates the hassle of running and maintaining server-based software because it offers both the software and the server-side maintenance in the same way that WordPress.com hosts WordPress blogs. This means that soon even the least technologically inclined could be up and running with Omeka and begin to catalogue their personal or group archives.
The potential for teaching is really clear. Curation is becoming an important watchword in our digital age as people come to realize that the quantity of data produced has come to challenge our ability to manage it. The ability to deploy and teach easily a powerful tool like Omeka for collecting, organizing, and presenting a wide range of digital material (primarily in the humanities, but Omeka is hardly a tool limited to a particular discipline) will introduce information management and literacy skills that are likely to be relevant for our digital age.
Right now, Omeka.net is in invitation-only alpha testing with all the attendant caveats, but I asked for an invitation and received it within a few months.
2. Ecto vs. MarsEdit. This past week, ProfHacker (a must-read for tech-curious faculty) briefly discussed the relative merits of two offline blog-composition tools, Ecto and MarsEdit. If you’re a blogger (and these days, who isn’t?), it is almost essential to be able to write a blog post someplace other than the online space provided by your blog provider. In general, the online editors provided by most blogging services (e.g., Typepad, WordPress, Blogger) are underpowered, a bit fickle, and dependent on your internet connection (and the stability of your browser) to work. There is nothing more frustrating than composing a brilliant post online and seeing it vanish with a browser crash or internet interruption. Offline composers are half light-duty word processors and half light-duty HTML editors. The best option is probably Windows Live Writer, but there is no Mac version of this flexible and stable little program. The two best for Mac users are Ecto and MarsEdit. Both provide a word-processor-type interface that allows you to compose easily, edit HTML, and integrate various media content.
I used Ecto for over a year and found it pretty satisfactory. It did a particularly nice job managing links (and a blog is nothing without its links to other blogs and sites on the web) and images. MarsEdit has a slightly nicer interface for writing, however. I love that I can change the font that I am writing with in MarsEdit without changing the font that appears on my blog. In other words, I indulge my idiosyncratic preference to compose in American Typewriter font without having to publish using that font. MarsEdit may be a bit less capable of handling images, however.
Either tool makes blog writing less of an adventure and more of a pleasure. The simple interfaces encourage a focus on the words (not dissimilar from the recent spate of simplified word processors like WriteRoom), and the stability and security of the software encourage me to write in a longer form than I otherwise might on the web.
3. Daytum. Daytum is one of the quirkier services on the web. It provides subscribers with an interface where they can record and quantify things. For example, I count the number of words that I write each day (since I started using Daytum, I’ve written 73,810 words). I also record whether I get a ride home with my wife or walk; to date, I’ve walked home 35 times and gotten a ride home 34 times since January. I like recording the temperature in my office in the morning, but I’m just like that. I also like the idea of keeping track of how many pages I read each day, but I’ve found that more of an inconvenience as I move from reading paper books and articles to reading across a wide range of media, many of which do not use pages at all (e.g., the web, my iPad, etc.).
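For the curious, the basic idea behind this kind of self-quantification is simple enough to sketch in a few lines of Python. This is a toy illustration of keeping dated tallies and summing them by category, not Daytum’s actual implementation or API, and all the names here are hypothetical:

```python
from collections import defaultdict
from datetime import date

class Tally:
    """A toy Daytum-style counter: record dated amounts per category and sum them."""

    def __init__(self):
        # category -> list of (day, amount) entries
        self.entries = defaultdict(list)

    def record(self, category, amount, day=None):
        """Log an amount (words written, walks home, etc.) under a category."""
        self.entries[category].append((day or date.today(), amount))

    def total(self, category):
        """Running total for a category, like Daytum's cumulative counts."""
        return sum(amount for _, amount in self.entries[category])

tally = Tally()
tally.record("words written", 1200)
tally.record("words written", 900)
tally.record("walked home", 1)
print(tally.total("words written"))  # prints 2100
```

A real service would persist these entries and chart them, but the core is just this: timestamped counts, grouped and summed.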
Daytum is a free indulgence for those obsessed with quantifying their lives. At the same time, it represents the far fringe of a whole batch of software designed to help us become more efficient, or at least more aware of how we spend our time. As academics, we always seem to be running out of time, stumbling across some new deadline, or having to negotiate some kind of delicate work-management solution to balance relationships, teaching, research, and “outside” interests.
Two curious articles appeared yesterday. One was about Powerpoint (or as I call it, The Powerpointer) in the New York Times (and was picked up by the Chronicle of Higher Education‘s Brainstorm blog). The prevalence of Powerpoint in military briefings has apparently reached epidemic levels, and many folks within the military are saying that the reliance on Powerpoint to communicate information not only makes the seemingly endless stream of briefings debilitatingly boring, but also might impair the ability to make good decisions. In fact, one military official argued that Powerpoint is responsible for creating “the illusion of understanding and illusion of control” in the U.S. military. Let’s hope that this is hyperbole. What is clear, however, is that creating, presenting, and enduring Powerpoint shows takes a tremendous amount of time, and a significant part of that time is spent dealing (in both good and bad ways) with Powerpoint itself rather than with the content of the Powerpoint presentation. This would seem to be a perfect example of technology having agency: Powerpoint creates a culture that depends upon the use of Powerpoint for its daily work, basic communication patterns, and ultimately its decision making.
The other article appeared at the blog academHacK and questioned the value of the iPad in higher education. David Parry argued that Apple’s practice of censoring apps that do not coincide with its rather ambiguous and strictly enforced views on propriety poses a serious threat to the utility of the iPad in the context of higher education. In large part, Parry’s argument focused on the possibility that Apple would censor textbooks that appear as apps on the device. This might happen, of course, but it seems to me another version of a standard complaint: Apple’s device is too limited and limiting to be useful in a university classroom. Whether it is content creation, app censorship, the device’s inability to run Flash, or even the inflexible and relatively hack-proof operating system, digital humanists have begun to rally against the iPad as another example of what is wrong with how the computer industry approaches academia. The fear is that the potential of the iPad will ultimately lull us into accepting its limitations and, as a result, limit the potential for a genuinely creative intersection of technology and learning. In other words, the iPad promotes a coarsely transactional approach to teaching and learning and facilitates the movement of highly commodified packets of knowledge from a relatively inflexible content provider to the consumer.
Both of these arguments postulate that the object (Powerpoint or the iPad) exerts control over the user in particularly unsubtle ways. Powerpoint somehow makes military briefings boring or suspends critical inquiry. iPads create apparently insurmountable barriers between content consumers (students) and content producers. A little Bruno Latour could go a long way in this context. Both the iPad and Powerpoint exist in a particular network of relations that influences how the technology is used and will be used. To assume that the iPad will be used on university campuses without some kind of compromise regarding its flexibility and issues of censorship marginalizes the power of university faculty to find or create workarounds, to reject poorly designed devices (just as many faculty members reject poorly designed textbooks or poorly conceived websites), or to create pedagogical environments where the strengths of the iPad shine and its limitations are accommodated without sacrificing teaching or learning objectives.
The same can be said for the Powerpointer. Compared to the tedious practice of preparing, creating, and maintaining collections of photographic slides, The Powerpointer is revolutionary. Moreover, in a critical environment like the university or the military, it can be controlled. Boring Powerpoint presentations likely reflect boring lectures, unnecessary briefings, and a culture of tedium rather than actually producing them. In fact, it may be that The Powerpointer manifests agency by allowing us to recognize the inefficiency of a particular culture or practice of which it is a part.
It is always disappointing to see a piece of technology blamed for its limitations as if technology existed outside the human networks in which it is used. Recognizing the role of technology in establishing expectations is a valid form of critique, but a symmetrical approach to understanding technology demands that we give equal consideration to the character of the networks in which the technology will function.