Both of these readings have made me excited to engage in the critical conversation surrounding text encoding. Lou Burnard’s article offers an optimistic vision of text encoding as a possible “interlingua for the sharing of interpretations” (6), but Amanda Gailey presents some real problems and limitations that challenge Burnard’s somewhat utopian ideas. While Burnard speaks with the hope of an ideal “unified approach” (3) that text encoding allows, I find that Gailey’s call for “deep editing” in light of text encoding is more tethered to the realities and limitations of applying close reading to digital markup. Gailey points out that XML (TEI) encoding “would almost certainly fail to accommodate several different interpretations of the text coexisting in the same file” (8), which conflicts with Burnard’s hope for a shareable “critical consensus.” Indeed, as many of us know, critical consensus is rare at best in our respective fields, and Gailey is speaking our language when she voices skepticism about an unproblematic vision of what text encoding can be. We are made to confront the fact that a fully encoded text, analysis included, would need to accommodate multiple scholarly perspectives.

     Another important limitation Gailey raises concerns the problems inherent in encoding a fully “searchable” document by modern standards, which, as opposed to old-fashioned close reading, involves a cursory glance or term-based search for certain themes or topics. Often, an author – especially one working in a literary tradition – will not explicitly mention the topic under discussion, whether for hyperbolic or metaphorical effect – as in Whitman’s “O Captain! My Captain!”, which refers to Abraham Lincoln without naming him (Gailey 6) – or simply because of some compositional circumstance that allows for its omission – as in the jaybird pun in the Joel Chandler Harris tale that Gailey discusses (13). These hidden or obscured meanings (and here Gailey is exceptionally insightful) need to be made explicit for researchers through encoding, or the document risks being merely a reliable digital text without any of the potential that text encoding makes possible.

     Gailey’s concerns about how encoding will affect the canon and perhaps present an “underdeveloped or optimistically skewed” (16) idea of the past are also interesting and highly relevant to the practice of digitizing texts – prompting questions familiar to anyone who has studied the literary canon: Which texts do we choose to digitize, and why? What do we leave out, and why? I look forward to working through these ideas and more with all of you!

-Ryan Gerber