Burnard proposes that markup makes explicit “a theory about some aspect of the document” and “maps a (human) interpretation of the text into a set of codes,” and so “enables us to record human interpretations in a mechanically shareable way.” Though I think he rightly points out that text encoding does (to some extent) render visible the editor’s interpretive framework, his stance elides the practical constraints that markup languages impose on that framework. The mechanical shareability Burnard associates with markup requires that interpretive acts be constrained by the formal grammar of the markup language. The ambiguity (or, perhaps, polysemy) that often lies at the heart of both interpretation and ‘theories of the text’ can simply yield a ‘validation error’ when encoded digitally. These and related issues are central to Gailey’s analysis and her plea for “heavy editing.” Burnard suggests that this “single formalism” ultimately reduces complexity and facilitates a “polyvalent analysis,” yet he acknowledges that this depends on a single, unified encoding scheme. Burnard’s optimism regarding the power of digital encoding is, in my view, in many ways justified, though his claim that markup records human interpretation “in a mechanically shareable way” should be qualified accordingly.
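To make the validation point concrete, here is a minimal sketch of the classic case of overlapping hierarchies. The TEI elements are real (`<lg>` for a line group, `<l>` for a verse line, `<q>` for a quotation, and the `next`/`xml:id` linking attributes), but the passage and the editorial scenario are invented for illustration: an editor who reads a quotation as spanning two verse lines cannot encode that reading directly, because XML forbids overlapping elements, so the parser rejects the interpretation before it can be recorded.

```xml
<!-- The interpretive claim: the quotation begins midway through the
     first verse line and ends midway through the second. -->

<!-- Ill-formed XML: <q> opens inside one <l> and closes inside the
     next, so the two hierarchies overlap. Any XML parser, never mind
     a TEI schema validator, rejects this outright. -->
<lg>
  <l>She said, <q>the stars are fire,</l>
  <l>and doubt thou them</q> no more.</l>
</lg>

<!-- One well-formed workaround (among several the TEI Guidelines
     offer): split the quotation into segments and link them with
     attributes, flattening a single interpretive act into pieces the
     formal grammar will accept. -->
<lg>
  <l>She said, <q next="#q2">the stars are fire,</q></l>
  <l><q xml:id="q2">and doubt thou them</q> no more.</l>
</lg>
```

The workaround preserves the interpretation only by re-expressing it in a shape the formalism permits, which is precisely the kind of circumstantial constraint at issue above.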