Electronic Textual Editing: Editors' introduction [ Lou Burnard (Oxford University & Text Encoding Initiative); Katherine O'Brien O'Keeffe (Notre Dame University & Committee on Scholarly Editions); John Unsworth (University of Virginia & Committee on Scholarly Editions & Text Encoding Initiative). ]



Ever since the invention of the codex, the long and distinguished history of textual editing has been intimately involved in the physique of the book. The format of that remarkable invention, less fragile by far than the scroll and amenable to a more rapid retrieval of information, has determined, until the present, the ways in which writers brought texts into the world and readers encountered them. In obvious but also subtle ways, the physique of the book and its economics have both enabled scholarly textual recovery and set limits on it. Certainly, the carefully elaborated sets of rubrics for the recovery of textual artifacts (whether addressing problems of Greek tragedy, Jewish or Christian scriptures, medieval vernacular literatures, early modern drama, or the novel) were substantially governed by the realities of book format. Notably, these required considering how the material to be edited could be represented within the confines of the page and recognizing practical limits on the plethora of information that might be brought to bear on a textual problem. Such limits were not merely matters of structural design (that is, replacing the manuscript's marginal gloss by foot- or endnotes or appendices; the introduction of split levels for an apparatus criticus; space limitations on synoptic presentations). Such a book had to be bindable, liftable, and, perhaps most important, affordable. The scholarly debates over what sort of editions to produce—whether favoring the textual object, the author of the text, or the text's reception history—were driven as much by economics as by ideology. Quite simply, one could not have it all.

The rapid spread of computing facilities and developments in digital technology in the eighties and nineties offered the possibility of circumventing a number of practical (both physical and economic) limitations posed by the modern printed codex. Over the last two decades, it has become increasingly evident that the written word, in all its manifestations, has taken on a digital form. The implications of this adoption appear to be as radical as those of the codex itself. This metamorphosis, if it is one, has naturally been most keenly debated in the home of the written word: the world of scholarly editing and textual theory. The debate has involved practitioners at either end of a spectrum that runs from metaphysical speculation on the nature of textuality at one extreme, through questions of editing theory, to pragmatic concerns about machinery, software, mark-up, and best practice at the other. The present volume offers, we hope, an emerging consensus about the fundamental issues of electronic textual editing, together with guidance on accepted current wisdom.

Coincident with the spread of computing facilities, and their adoption as the basic means of communication amongst academics at all levels, has been an extraordinary democratization in the production of textual editions. Professional academics, researchers, students, and enthusiasts at all levels and from many different fields frequently put texts online for teaching or research purposes. The democratization of publishing through access to the internet has not brought with it, however, a concomitant broadening in the reliability of such editions: a text's reliability is, unfortunately, inversely proportional to its innocence of the canons of editing. In the light of these realities, the world of possibility presented by individual electronic publication raises questions and a challenge. What is the point of contact between the canons of textual editing, formulated as they have been for the technology of the printed word in the codex, and the emerging possibilities of the digital text? Through what structures can we imagine a new form of editing whose limits, theoretical, practical, and economic, are other than those of the printed book? And the challenge: to make available to prospective editors—whether approaching the task for the first time or seasoned veterans of print—the conceptual and practical knowledge they must have to engage with electronic textual editing. Currently, such information is thin on the ground. While there is a rich literature on virtually any kind of scholarly editing designed for the printed book, the fruit of multiple experiments in electronic scholarly editing remains, substantially, at the level of individual experience, and when that experience is shared in published form, it tends to be shared as theoretical speculation rather than as practical guidance.

The Committee on Scholarly Editions

The publicly distributed version of the Guidelines of the Modern Language Association's Committee on Scholarly Editions (CSE) that was in use up until the publication of this volume was last revised in 1992, and it took the form of an essay setting out a mixture of principles and best practices, and a checklist of very specific ‘guiding questions’ for the vetter of a print edition (and, by implication, for its editor). The essay consisted of seven pages of outline distributed under four large headings: ‘Conception and Plan of Volume/Edition’; ‘Editorial Methods and Procedures’; ‘Parts of the Edition’; and ‘Preparation for Publication.’ Interestingly, the largest part of the Guidelines was ‘Parts of the Edition’ (not methods and procedures), detailing considerations for the production of the Text, the components of the Textual Essay, the Critical/Textual Apparatus, and Extra-Textual Materials. The Guidelines themselves were meant to be, and indeed were, useful within this understanding of the task of scholarly editing. And although the 1992 Guidelines clearly attempted to be catholic in their recognition of variation, they show their pedigree, descending from the copy-text theory that drove the Center for Editions of American Authors (1963-1976)—though in becoming the CSE, the CEAA broadened its purview to include other kinds of editions and reinvented itself as a committee designed to offer advice to editors with work in progress, as well as to commission evaluations of ready-to-be-published editions.

The four major divisions of the 1992 checklist occupied some six printed pages, a brief 20 lines of which were devoted to electronic media, which were imagined only as a kind of handmaiden to the traditional editorial procedures of making a book. The primary consideration under ‘Use of Electronic Files’ was the three areas on which the editor and the publisher had to reach an understanding: 1) choice of software (with attention to linkage of notes and non-standard characters); 2) whether the ‘electronic files’ were simply to supply data for the galleys or were to supply page makeup; 3) who would be responsible for final changes or corrections—editor, publisher, or typesetter. Other considerations were more in the line of reminders or helpful hints: electronic files used to drive the typesetter still required proofing; hard copy was required as well as disks; electronic files required careful archiving. The last item (number 5) under ‘Use of Electronic Files’ should be quoted in full: ‘Consideration should be given to publication of the edition on floppy disks, CD-ROM, or other electronic text formats.’ It commands our interest not simply because it points to how far we have come (who today could imagine publishing on floppy disks?), but because it identifies precisely the presupposition behind the 1992 Guidelines—that an edition was a print-bound object, and that this medium defined and dictated the procedures for producing a ‘scholarly edition.’

In December of 1993, Peter Shillingsburg produced a document for the CSE called ‘General Principles for Electronic Scholarly Editions,’ and in 1997, Charles Faulhaber advanced that effort by bringing the 1993 document forward as a separate set of ‘Guidelines for Electronic Scholarly Editions,’ [1] more or less in parallel with the Guidelines for (Print) Scholarly Editions. The difficulty with this sort of solution to the short shrift given electronic media in the 1992 Guidelines was that it reified, albeit in an admittedly more detailed and more helpful fashion, the split that the 1992 Guidelines took for granted: there were ‘Scholarly Editions’ and there were ‘Electronic Scholarly Editions.’ Further, because many of the same principles applied in both media, parallel guidelines raised the problem of coordinating additions and revisions across two documents. An outside review of the draft Guidelines for Electronic Scholarly Editions, conducted by a group of editors working on electronic scholarly editions, produced the suggestion that there should be only one set of guidelines, and that they should be structured to facilitate maintenance—in particular, the addition of new examples and frequent revisions to reflect changes in technical best practices. More specifically, that review committee recommended that the CSE develop a three-tiered document to address both print and electronic scholarly editing. The first tier would be a brief document stating at the general level the necessary characteristics of the scholarly edition. The second tier would address best practices within specific traditions of editing and with respect to particular kinds of source material. The third tier would offer a two-part technical best-practices manual, one part for best practices in producing print editions, and one part for best practices in producing electronic editions. The Guidelines included in this volume have been thoroughly revised and restructured in accordance with that advice, and so represent an important recognition on the part of the CSE, the MLA, and the community of scholarly editors represented by those organizations, that scholarly editing as an intellectual activity is independent of the medium of publication, even if the methods used to achieve reliability in an edition may vary somewhat by medium.

The Text Encoding Initiative

An international and interdisciplinary standards project, the Text Encoding Initiative (TEI) was established in 1987 to develop, maintain, and promulgate hardware- and software-independent methods for encoding humanities data in electronic form. Even in 1987, it was clear that without such an effort the academic community would soon find itself overwhelmed by a confusion of competing formats and encoding systems. Part of the problem was simply a lack of opportunity for sustained communication and coordination, but there were more systemic forces at work as well. Longevity and re-usability were clearly not high on the priority lists of software vendors and electronic publishers, and proprietary formats were often part of a business strategy that might benefit a particular company, but did so at the expense of the broader scholarly and cultural community. At the end of the eighties there was a real concern that the entrepreneurial forces that (then as now) drive information technology forward would frustrate such coordination through the proliferation of mutually incompatible technical standards.

The TEI Guidelines, like the CSE Guidelines, outline a set of best practices, but they also embody those best practices in a formal and computable expression, originally constructed using Standard Generalized Markup Language (SGML), and since the 4th revision of the Guidelines (published in 2002), expressible in eXtensible Markup Language (XML) as well. The TEI Guidelines are an extraordinary example of international interdisciplinarity, having been produced by hundreds of scholars from many different humanities disciplines, working in dozens of workgroups over more than fifteen years to specify a formal representation for what were considered the most important features of literary and linguistic texts.

The TEI Guidelines today take the form of a substantial 1300-page reference manual, documenting and defining some 600 elements that can be combined and modified in a variety of ways for particular purposes. Each such combination can be expressed formally as a kind of document grammar, technically known as a ‘document type definition’ (DTD). The size of the Guidelines would be daunting, were the TEI encoding scheme not highly modular. The designer of a TEI DTD reviews the available ‘tagsets’ (modules, each containing semantically related element definitions) and chooses how they are to be combined. Individual elements may be renamed, omitted, or modified, subject only to some simple architectural constraints. The TEI maintains web-accessible software (for example, the ‘TEI Pizza Chef’) that helps users carry out this task. The size of the TEI Guidelines in book form is also somewhat daunting, particularly as they have gone through four editions since 1990.[2] The digital form of the Guidelines has always been freely available on the Internet and should be consulted for up-to-date, authoritative information (see http://www.tei-c.org/ ).
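
To give a concrete sense of what a TEI-encoded document looks like, the skeleton below is a minimal, invented example in the P4 idiom (in the forthcoming P5 the root element becomes TEI and carries a namespace declaration): a teiHeader records bibliographic metadata about the electronic file and its source, and the text element contains the encoded edition itself. The title, publication statement, and chapter division are placeholders; the elements actually available to any given project depend on the tagsets its DTD designer has selected.

    <TEI.2>
      <teiHeader>
        <!-- metadata about the electronic file and its source -->
        <fileDesc>
          <titleStmt>
            <title>A sample electronic edition</title>
          </titleStmt>
          <publicationStmt>
            <p>An unpublished demonstration file.</p>
          </publicationStmt>
          <sourceDesc>
            <p>Transcribed from a hypothetical printed source.</p>
          </sourceDesc>
        </fileDesc>
      </teiHeader>
      <!-- the encoded edition itself -->
      <text>
        <body>
          <div type="chapter" n="1">
            <p>The opening paragraph of the edited text.</p>
          </div>
        </body>
      </text>
    </TEI.2>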

The work of the TEI has been endorsed by many organizations, including the US National Endowment for the Humanities, the UK's Arts and Humanities Research Board, the Modern Language Association, the European Union's Expert Advisory Group for Language Engineering Standards, and many other agencies around the world that fund or promote digital library and electronic text projects. The impact of the TEI on digital scholarship has been enormous. Today, the TEI is internationally recognized as a critically important tool, both for the long-term preservation of electronic data, and as a means of supporting effective usage of such data in many subject areas. It is the encoding scheme of choice for the production of critical and scholarly editions of literary texts, for electronic text collections in digital libraries, for scholarly reference works and large linguistic corpora, and for the management and production of item-level metadata associated with electronic text and cultural heritage collections of many types.

Electronic Textual Editing

There is work for a generation or more of textual editors in the transmission of our cultural heritage from print to electronic media, but if that work is to be done, then a rising generation of scholars will need to receive professional credit for doing it. In order for that to happen, tenure and promotion committees will need to evaluate work of this kind. Our aim is to address that need, at both scholarly and technical levels: in this volume, the updated version of the CSE Guidelines and the most recent release of the TEI Guidelines frame a wide-ranging collection of essays that covers both practical and theoretical issues in electronic textual editing. The need for such a volume is immediate: there are currently few manuals, summer courses, or self-guided tutorials that would help even trained textual editors transfer their skills from print to electronic works.[3] Put another way, the evidence of need is to be found in the tens of thousands of poorly selected, unedited, and (most often) unidentified editions of literary texts one can find instantly on the Web. In response to that situation, our main goal has been to encourage careful work in the production of new digital editions by providing scholarly editors with pragmatic advice from expert practitioners (who are also internationally respected authorities in this expanding and interdisciplinary field), as well as by reproducing and uniting the best standards so far developed for such work. And this volume may also serve another useful purpose in recording how, at the present moment, the community of textual editing is changing and evolving in response to the emergence of new technologies.

With all of that in mind, we ask the casual reader—who may at first glance detect only a cacophony of unrelated specialist voices here—to look a little deeper, and to seek out the common concerns that link all the contributors. The volume has four major sections. In pride of place, we provide a complete revision of the MLA's CSE Guidelines for Editors of Scholarly Editions. This revision updates the checklist for vetters of print editions and adds a glossary to explain the terminology of the checklist (revised and explicated by Robert Hirst, UC-Berkeley); it also includes a new checklist and glossary specifically aimed at vetters of electronic editions, compiled by Morris Eaves and John Unsworth, and a detailed annotated bibliography covering the whole field of editorial methods, compiled by Dirk Van Hulle of the University of Antwerp. Then, after the editors' brief discussion of the most basic principles of scholarly editing, the bulk of this volume consists of 26 contributed essays, which we have grouped under two headings: the first, material and theoretical approaches; the second, actual practices and procedures. In each section, the contributors ground their discussions in their own practical experience.

The section on Sources and Orientations is appropriately opened by a joint contribution from two leading theorists of textual editing in complementary fields of expertise, Dino Buzzetti (medieval philosophy) and Jerome McGann (nineteenth-century English literature). Their reflections on the disciplinary transformations brought on by the development of powerful digital tools demonstrate in compelling ways the new opportunities and the new problems arising from ‘born-digital artifacts.’ There follow eleven case studies exploring specific problems and solutions for electronic editing in different genres, disciplines, and media. Editing from manuscript materials and fragments constitutes in itself a set of particular problems. Peter Robinson draws on his experience editing Chaucer for CD-ROM to offer a series of ‘lessons’ for anyone contemplating the edition of a medieval text with multiple witnesses. Quite another set of challenges for electronic editing is posed by historical documents: Bob Rosenberg uses the Edison papers (comprising some 1.5 million documents) to explore the problems and decision processes involved in the documentary editing of the papers and drawings of so prolific an inventor. In both cases, the computer's ability to handle, integrate, and manipulate vast amounts of disparate materials opens up new possibilities of access. The field of epigraphy presents a range of problems and solutions, applicable also to all forms of surviving writing, whether complete or fragmentary. Anne Mahoney, in ‘Epigraphy and the TEI,’ addresses digitization projects in Greek and Latin inscriptions, and discusses the extent to which it is possible to preserve both information and its interpretation in such a context.

Next, electronic editing strategies for print-based texts are addressed in eight genre-based chapters. In ‘The Poem and the Network: Editing Poetry Electronically,’ Neil Fraistat and Steven Jones, co-editors of the collaborative Romantic Circles Website, define the particular challenge of electronic editing: ‘to produce an electronic edition that doesn't simply translate the features of print editions onto the screen, but instead takes advantage of the truly exciting possibilities offered by the digital medium for the scholarly editing of poetry.’ By framing the questions before scholarly editors of letterpress editions in the terms of an electronic medium, they address the fundamental questions that electronic editors of poetry must answer before beginning their task. Drama is a genre with problems both overlapping those of poetry and distinct from them. David Gants illustrates these problems from the early modern theatre with ‘Drama Case Study: The Cambridge Edition of the Works of Ben Jonson.’ The Cambridge edition is a distinctly hybrid edition, in that it is envisioned as two separate projects—a six-volume edition in print form and a networked electronic edition which is expected to grow over time. The goal of the second of these two projects is to realize as fully as possible the potential of the electronic medium, and Gants pays special attention to the problems of encoding peculiar to drama as a genre. In ‘Prose Fiction and Modern Manuscripts: Limitations and Possibilities of Text-Encoding for Electronic Editions,’ Edward Vanhoutte defines an electronic edition and its aims, and offers as a case study in fiction his electronic edition of the classic Flemish novel De teleurgang van den Waterhoek by Stijn Streuvels.

Two essays consider the possibilities of electronic editing for non-fiction texts. Claus Huitfeldt's contribution, ‘Editing Philosophy: Wittgenstein's Nachlass—The Bergen Electronic Edition,’ sets out the thinking behind the design of a documentary edition, containing a facsimile, a diplomatic transcription, and a normalized transcription, of Wittgenstein's manuscript Nachlass. In setting out the procedures for ensuring consistency across the three parts of the edition, Huitfeldt also shows how such a project recapitulates the classical philosophical problems of representation and interpretation. Some pressing considerations for the electronic editing of religious texts are offered by David Parker in ‘Electronic Religious Texts: The Gospel of John.’ In surveying how to define a religious text and how to treat scriptures considered sacred, his essay ranges widely across the peculiar difficulties of editing scripture, no matter the medium. His essay might be considered a ‘counter-case’—a cautionary tale of editing in a sea of variants. Authorial translations constitute a separate case for editing, and Dirk Van Hulle offers in ‘Authorial Translation: The Case of Samuel Beckett's Stirrings Still / Soubresauts’ the case of a text in twenty versions crossing two languages. In this case we see the development of a genetic electronic edition that aims to capture the work in all its states. Van Hulle shows how the complex genesis of Stirrings Still might be approached through different traditional editorial methods and gives a comprehensive presentation of the ways in which this edition represents Beckett's complex text.

The final two essays in this section are composites of different kinds. Morris Eaves draws on his experience with the Blake Archive as he explores the problems peculiar to a multi-media electronic edition of a single author, with particular attention to the technical and intellectual problems posed by the need to achieve a balance between ‘art’ and ‘text’ in an electronic scholarly edition. By contrast, Julia Flanders presents the case of a large, multi-author collection. She draws on her experience migrating an editing project from print to electronic form in ‘The Women Writers Project: A Digital Anthology.’ This collection has been described variously as an ‘archive,’ an ‘edition,’ and an ‘anthology,’ and serves, in a sense, the purposes of all three. She shows how the digital anthology in its variety may serve as the ‘scale model’ of the digital world.

The section entitled Practices and Procedures groups together essays whose main focus is a particular practical issue of general importance to those undertaking digital textual editing. Inevitably, some readers may find portions of what is discussed here overwhelmingly technical, but we have tried to retain accessibility without rendering the discussion vacuous. This part opens with a very accessible essay, ‘Effective Methods of Producing Machine-Readable Text from Manuscript and Print Sources,’ which contrasts the practical experiences of two major digital editing projects: Hoyt Duggan on the Piers Plowman Project and Eileen Fenton on the JSTOR service. The essay demonstrates both unexpected similarities and differences in the not-so-mechanical processes of transforming medieval manuscript and modern print materials into digital form. In the ensuing chapter on levels of transcription, Matthew Driscoll discusses how the TEI may be used in the transcription of letter-forms, abbreviations, structure, layout, emendations, and other features of a manuscript text. Kevin Kiernan also addresses working with manuscripts, but in his essay the focus is on issues raised by the use of digital facsimiles in editing. Presenting the possibilities of the digital, image-based edition of early medieval manuscripts, he shows how high-resolution facsimiles and digital restoration allow editors to represent the historical state of primary texts far better than would be possible with print technology. His considerations of the promise of such editions also point to the need to bring the humanities and computing science together to develop more flexible tools for image search and textual encoding. Kiernan's essay is followed by an essay on ‘Authenticating an Electronic Edition’ from a group of editors at the Australian Centre for Scholarly Editions (Paul Eggert, Phil Berrie, Graham Barwell, and Chris Tiffin), exploring the difficult question, ‘how can textual reliability be maintained in the electronic environment?’ Next, Greg Crane explains the inner workings of the Perseus Digital Library System, one of the oldest and largest collections of electronic editions. Perseus—originally focused on, and still best known for, editions of classical-era texts—has for nearly two decades grappled with changes in language technology. In ‘Writing Systems and Character Representation,’ Christian Wittern explains in lucid detail where those technologies stand today, shows how text encoding is built on character encoding, and demonstrates the importance, to editors, of understanding how character encoding actually works. In ‘How and Why to Formalize Markup,’ Patrick Durusau explains why it is important for electronic textual encoding projects, no matter how small, to record and explain the choices they make as they work their way through applying the TEI (or any other markup scheme) to the editorial problems that their texts present. This section closes with Sebastian Rahtz's convincing demonstration of what can be achieved using current standards-based Web technologies to store, analyze, and display digital texts.
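
As a concrete illustration of the kinds of encoding decision several of these chapters discuss (levels of transcription, character representation, documented markup choices), the fragment below is our own invented example in the P4 idiom; later revisions of the TEI scheme add further mechanisms for grouping alternative readings, but the principle is the same. It records an abbreviation alongside its editorial expansion, a scribal error alongside its correction, and a character absent from the keyboard by means of a numeric reference to its Unicode code point.

    <p>
      <!-- the thorn (U+00FE) entered as a numeric character reference,
           with the abbreviation's editorial expansion recorded as an attribute -->
      He seyde <abbr expan="that">&#x00FE;t</abbr> he wolde come,
      <!-- a scribal error preserved, with the editorial correction alongside -->
      and his <sic corr="friend">freind</sic> also.
    </p>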

This part of the volume contains some non-technical discussion as well. Perhaps surprisingly, we include a short discussion by John Lavagnino of circumstances in which the TEI might not be an appropriate solution, because we recognize that all representational schemes, including the TEI, must be informed by an ontology that will, in some cases, be inadequate, inefficient, or inappropriate. This section also presents a detailed meditation by Hans Walter Gabler on the experience of moving a print-based editorial project into electronic form, which we hope will be useful to the growing number of editors who find themselves in analogous positions. The volume concludes with two essays concerned with important issues raised by the new modes of publication and distribution. No scholarly editor can afford to proceed far with a project without some basic understanding of questions of copyright and contracts. In their detailed and indispensable essay, ‘Rights and Permissions in an Electronic Edition,’ Mary Case and David Green review the relevant law and its implications for scholarly editions—both for authors and editors. Finally, Marilyn Deegan addresses the question of what editors can do to facilitate library collection and preservation of their electronic editions.

No printed book that deals with information technology can entirely avoid obsolescence, and in this case, we fully expect that certain parts of the volume will be outdated, if not by the time that the book is published, certainly before it has been in print for even a year or two. For that matter, even if we begin counting the history of electronic scholarly editions with Father Busa's punch-card Aquinas in the 1940s, we are only a few decades into developing an understanding of how to make and use electronic documents in general, or electronic scholarly editions in particular. It took five hundred years to naturalize the book, and a hundred and fifty years to develop the conventions of the scholarly edition in print. Those schedules reflect the time required for social, not technological change, and while the acceleration of technological change in this case may rush the social evolution of rhetoric for digital editions of print and manuscript sources, it will still be generations before the target of this volume stops moving. And even before that happens, as Matt Kirschenbaum has pointed out, we will soon be grappling with the problem of editing primary sources that are themselves digital—a problem with entirely new practical and theoretical dimensions (‘Interface’). Precisely because, in these circumstances, no book can be definitive and no rules or guidelines can be the last word on their subject, we need organizational mechanisms for the continued maintenance, development, and dissemination of standards and best practices. The CSE and the TEI are two such mechanisms. In closing, then, the editors would like to point out that both organizations depend on the work of individuals and the support of institutions to persist and to carry on their work, and we invite you to membership and participation.

Notes
1.
Peter Shillingsburg, ‘General Principles for Electronic Scholarly Editions,’ December 1993 ( http://sunsite.berkeley.edu/MLA/principles.html ); Charles Faulhaber, ‘Guidelines for Electronic Scholarly Editions,’ December 1997 ( http://sunsite.berkeley.edu/MLA/guidelines.html ).
2.
At the time of writing, a fifth major revision known as TEI P5 is in preparation.
3.
The recently published Blackwell's Companion to Digital Humanities, co-edited by Susan Schreibman, Ray Siemens, and John Unsworth, does include a chapter describing electronic scholarly editing, by Martha Nell Smith; however, since the Companion's intent is to serve as a textbook or reference work covering the entire field of ‘humanities computing,’ it does not offer in-depth coverage of any particular practice within that field.
