First, I very much enjoyed reading Burnard and Gailey. What was most compelling about these articles, for me, was how they made me think about what it would mean to make one’s interpretive process explicit and sharable. I think I’ve always assumed that, at heart, there is something radically personal, even idiosyncratic, about the experience of encountering a text. So it was refreshing (and strangely satisfying) to read about “decoding” and “re-encoding” a text in explicit and unambiguous terms, and about recording human interpretation in a mechanically sharable way (here I’m paraphrasing Burnard). In reading Gailey, I was struck by how the challenges of creating a digitally searchable edition can help us rethink the assumptions we bring to our interpretive practices, which reminded me that theories of text are connected to the actual practices and protocols of interpretation, and vice versa (plus, I loved her rumination on the word “search”!).
In both readings, I am fascinated by the brief mention of “ontology,” the philosophical study of what exists and how it may be classified. Much of my research involves the classification and categorization of ancient political structures, but, especially coming out of the vaguely post-positivist methodological tradition of classics, there is almost no methodological, theoretical, or philosophical discussion of how to do such research.
I was astonished to discover, by accident, that there was a name, Content Analysis, for the methodology I had been struggling to develop over the past year and a half of my master’s degree. I have since tried to broaden my methodological base by looking at social science frameworks, and have also briefly looked into books on ontology, but they were heavily mathematical and I was not sure what to make of them (I wondered if mathematical ontology was itself a war zone between the analytic and continental philosophical camps, but I didn’t get too far into it).
Although both papers discuss the philosophical implications broadly, I wonder what other “reinventing-the-wheel” problems we will encounter in XML-TEI editing that have already been solved in other fields. After all, one of the primary limitations of the digital humanities seems to be the demand that practitioners be specialists in two fields: the humanities and the digital. It already takes years of study to be fluent in the basic modern and pre-modern languages of our research fields, and just as long to understand the logic of computer programming. Must we now also master ontological problems and their nuances? That may be asking too much of one individual, and perhaps that is where “teams” of digital humanities scholars need to come in.
While I don’t innately disagree with Lou Burnard’s statement that artifacts (and by extension, texts) carry the meanings that we imbue in them when we use them, I do wonder how this perspective might account for the cultural milieu that objects carry with them, particularly objects like manuscripts that move transtemporally. I certainly would not argue that inanimate objects carry the same agencies as human and animal actors, but I found myself wondering how Burnard would account for object-oriented ontologies and related frameworks like Bruno Latour’s actor-network theory. That said, I was drawn to Burnard’s comparison of “‘text’ and ‘textile’, between what is written and what is woven.” Manuscripts demand that we account for their physicality, so applying tactile metaphors makes sense. Still, the aura of “weaving” stands out to me, since we can weave both tapestries and stories.
It strikes me that Burnard, despite writing in 1998, identifies issues relevant to modern archival work, particularly the interpretive nature of markup and the necessity of technological migration. Burnard seems to locate the interpretive nature of transcription in critical editions that provide a kind of “best text,” whereas I see transcription as having a dual nature, belonging to both diplomatic and critical editions, since the two, as Burnard suggests, “reflect differing priorities, differing research agendas, and consequently different markup schemes.” This connects to page seven of Gailey’s article, where she unpacks how the <choice> tag provides both the original text and a regularized version. From previous transcription exercises, I find the <choice> tag particularly freeing, since it asks the editor to preserve the original language in one tag (<orig>) and then to provide the editor’s interpretation in the regularized (<reg>) tag. The <choice> moments (if you’ll pardon the pun) are where scholars not involved in text editing can see literary analysis occur and, for better or worse, how we might justify the labor of text editing as rigorous intellectual labor. I have no doubt that we’re all invested in digital editions of premodern works, so it is worth considering which parts of our efforts are readily translatable to our peers and colleagues.
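To make the mechanism concrete (a small sketch of my own, with invented sample text rather than an example from either article), a <choice> element pairs the manuscript’s original spelling with the editor’s regularization like so:

```xml
<!-- Hypothetical TEI fragment: <orig> preserves the source's spelling,
     <reg> records the editor's regularized reading of the same word -->
<p>The captain
  <choice>
    <orig>receiued</orig>
    <reg>received</reg>
  </choice>
  the letter.</p>
```

A reading interface or search index can then present either the diplomatic or the regularized text from the same file, which is exactly the dual nature I describe above.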
After reading both pieces by Lou Burnard and Amanda Gailey, I’m especially interested in Burnard’s contention that “text encoding provides us with a single semiotic system for expressing the huge variety of scholarly knowledge,” and in the ways in which both authors suggest that TEI could be used to describe (or, in certain cases, might fail to adequately describe) images. Gailey’s discussion of the illustration of an owl, crow, and jaybird in Harris’s “On the Plantation” provides a helpful example of this. Gailey notes that the illustration is likely the first use of the idiom “naked as a jaybird,” but that this phrase is conveyed through the image of the jaybird juxtaposed with an overdressed crow and owl, and is not mentioned in the text. In the case of the naked jaybird, Gailey’s editorial intervention is necessary to properly describe the jaybird image so that it becomes searchable. Gailey’s example raised a few questions for me: What can be communicated, and what is lost, in the process of describing images for textual encoding? How might description supplement the image itself? Because my own work centers on the relationship between textual description and pictorial representation, I’m excited to learn and think more about the ambiguity and interpretive possibilities of describing texts (and images) as a central part of digital editing.
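As a hypothetical sketch of the kind of intervention Gailey describes (the wording of the description is my own, not hers), TEI’s <figure> and <figDesc> elements let an editor attach a searchable prose description to an illustration that the surrounding text never names:

```xml
<!-- Hypothetical sketch: an editorial description in <figDesc> makes
     an unlabeled illustration discoverable by keyword search -->
<figure>
  <figDesc>An unclothed jaybird stands beside an overdressed crow and
    owl, a visual play on the idiom "naked as a jaybird."</figDesc>
</figure>
```

The description is unavoidably interpretive, which is precisely what makes my questions above interesting: the searchable surrogate is the editor’s reading, not the image itself.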
I thoroughly enjoyed reading these two articles. They both clearly articulate the stakes of the conversations surrounding text editing and encoding in ways accessible to specialists and new scholars alike. Their clarity reveals that the bar for entry into the digital humanities is not as high as is often assumed, but it also allows them to drive home the point that text editing and encoding require the same rigor associated with other forms of scholarship. I found the possibility of layering different scholarly interpretations of the same text eminently intriguing, particularly as a means of making academic discourse more transparent for non-specialists.
The readings also encouraged me to think more about decontextualization – not only of the text as a whole vis-à-vis its surroundings and textual contemporaries, but within texts themselves. How can editing and encoding be used to minimize the decontextualization which can often occur in digital searches?
Something that feels unaddressed, however, is the question of language itself. What difficulties are posed by non-Western languages? How might these complicate the choices inherent in the process of editing and encoding?
The assigned readings by Lou Burnard and Amanda Gailey were my first introduction to the Text Encoding Initiative. From the first article I learned about the impetus for establishing the TEI and one founder’s vision of its role in the development of humanities scholarship, and in the second I read one scholar’s account of the practical application of the TEI in her humanities projects. Because I have virtually no experience reading about software and encoding, I was happy to find that I could follow the history and jargon of the TEI, and also understand its ideals and goals for improving accessibility and communication between scholars and materials within the larger community of educators and students in the humanities.
Both of these readings have really made me excited to engage in the critical conversation surrounding text encoding. Lou Burnard’s article offers an optimistic vision of text encoding as a possible “interlingua for the sharing of interpretations” (6), but Amanda Gailey presents some real problems and limitations that challenge Burnard’s slightly utopian ideas. While Burnard speaks with the hope of an ideal “unified approach” (3) that text encoding allows, I find that Gailey’s call for “deep editing” is more tethered to the realities and limitations of applying close reading to digital markup. Gailey points out that XML (TEI) encoding “would almost certainly fail to accommodate several different interpretations of the text coexisting in the same file” (8), which conflicts with Burnard’s hope for a shareable “critical consensus.” Indeed, as many of us know, critical consensus is rare at best in our respective fields, and Gailey is speaking our language when she voices skepticism about an unproblematic vision of what text encoding can be. We are made to confront the fact that a fully encoded text, including analysis, would need to accommodate multiple scholarly perspectives.
Another important limitation that Gailey brings up concerns the problems inherent in encoding a fully “searchable” document by modern standards, where searching, as opposed to old-fashioned close reading, involves a cursory glance or a term-based search for certain themes or topics. Often an author, especially one working in a literary tradition, will not explicitly mention the topic under discussion, whether for hyperbolic or metaphorical effect (as in Whitman’s “O Captain! My Captain!”, which refers to Abraham Lincoln without mentioning him by name; Gailey 6) or simply because some compositional circumstance allows its omission (as in the jaybird pun in the Joel Chandler Harris tale that Gailey discusses; 13). These hidden or obscured meanings (and here Gailey is exceptionally insightful) need to be made explicit for researchers through encoding, or the result risks being merely a reliable digital text without any of the potential that text encoding makes possible.
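One way such hidden references can be made explicit (a hypothetical sketch of my own, not an example drawn from Gailey) is TEI’s <rs> (“referencing string”) element, which lets an editor tie an oblique phrase to the entity it invokes so that a search for the entity can find a line that never names it:

```xml
<!-- Hypothetical sketch: tagging an indirect reference so a search
     for "Lincoln" surfaces a verse line that never names him -->
<l>O <rs type="person" key="Lincoln">Captain</rs>! my
  <rs type="person" key="Lincoln">Captain</rs>! our fearful trip is done,</l>
```

The key value itself is an editorial judgment, which again foregrounds how much interpretation this kind of “mere” encoding embeds.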
Gailey’s concerns about how encoding will affect the canon and perhaps present an “underdeveloped or optimistically skewed” (16) idea of the past are also interesting and highly relevant to the practice of digitizing texts, prompting questions familiar to anyone who has studied the literary canon: Which texts do we choose to digitize, and why? What do we leave out, and why? I look forward to working through these ideas and more with all of you!