Month: October 2019 (page 3 of 3)

Pre-Workshop Reflection

I found Gailey’s discussion of the limitations of digital humanities really interesting.  She explains how, even as digitization opens up new avenues of research and enables a wider audience to access difficult or rare texts, the act of digitizing requires decisions about what data to present and how — decisions that inevitably obscure aspects of the original.  Gailey offers an example in the difficulty of rendering phonetic dialects.  Digitization in this case was an attempt to make a particular work universally available and comprehensible, but this could not be done without altering the work.  Encoding both “original” and “regularized” text was an innovative solution that drives home the point that digitization must be approached on a case-by-case basis.  I’m very excited to learn about the particular difficulties presented in digitizing medieval manuscripts and the creative solutions people have discovered!
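The “original”/“regularized” pairing Gailey describes maps onto a standard TEI pattern: the `<choice>` element groups an `<orig>` reading with a `<reg>` form, letting an interface display either one. A minimal sketch (the dialect word here is invented for illustration):

```xml
<!-- TEI's <choice> pairs the spelling as written (<orig>)
     with a regularized form (<reg>); a viewer can toggle between them. -->
<l>
  <choice>
    <orig>wuz</orig>
    <reg>was</reg>
  </choice>
  a mighty long time ago
</l>
```

Because both readings live in the same encoding, neither the searchable regularized text nor the text-as-written has to be sacrificed.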

Homework Instructions (see Handout 7 for more detailed instructions)

Saturday night (10/19), you have three tasks to complete.  You may do these individually, or arrange with your partner to tackle them together.

  1. Make sure you have the digital image of our manuscript downloaded onto your computer/easily accessible.
  2. Look over the portion of the MS that you have been assigned.  Using the resources you have been given today, transcribe at least three lines of your portion of the manuscript.  Be methodical: work slowly and carefully!
  3. Look over the whole of the section of the manuscript you have been assigned.  Identify the elements that you want to include in your marked-up text. (Note: this is not a question about cataloging information – don’t worry about the support, dimensions, etc.  Just focus on the text/images that you want to include in your text markup.) Select at least five of the things you’d like to mark up in your section. Alongside each of your chosen features, include a suggestion of how we might tag them in TEI.  For some there will be one clear option; for others there might be a number of possibilities.

***As you compile the list, you will find it useful to browse the TEI site to figure out how you might tag each element.
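As a starting point, here is a sketch (with invented line content) of TEI elements often used for common manuscript features — line breaks, rubrication, deletions, additions, and unclear readings. The `rend` and `place` values shown are conventional but project-defined, so treat them as assumptions to be standardized by the group:

```xml
<!-- Illustrative only: invented text showing elements commonly chosen
     for manuscript transcription. -->
<lb n="1"/>In the <hi rend="rubric">beginning</hi> was the word
<lb n="2"/>and the scribe <del rend="strikethrough">wroot</del>
  <add place="above">wrote</add> it
<lb n="3"/>though one word is <unclear reason="faded">hard</unclear> to read
```

For each feature you list, a short snippet like this is an easy way to show which tag you are proposing and why.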


We will discuss these tomorrow; you will be asked to defend your choices and explain why they should become the standard for the whole manuscript.


Post your short list of potential tags on the class blog. Remember to mark this post by selecting the category “Homework” before you post it so that it appears on this page!  Feel free to comment on other people’s suggestions.


We look forward to seeing you tomorrow!

Pre-Workshop Reflection

The Burnard and Gailey articles brought to my attention several aspects of digital editing and text encoding which I had not previously considered. I found Burnard’s focus on the hermeneutic aspects of text encoding extremely interesting, as well as his classification of texts as simultaneously being images, linguistic constructs, and information structures; all three of these dimensions have bearing on how we interact with texts as both producers and users of digital editions, and navigating their interplay poses a significant challenge. I also appreciated Gailey’s discussion of the rather contradictory meanings of “search” in use today (“to thoroughly explore everything, to scrutinize it, or to simply ask a computer whether something contains a piece of information, without ever looking at it at all,” 125-6), an Internet-related semantic shift that I had not previously considered. Her examples of significant observations made throughout the digitization process for the Whitman Archive (such as Andrew Jewell’s discovery that the glue stains on a UVA manuscript matched one at Dartmouth!) brought into focus the ways in which our close engagement with a given text throughout the encoding process––one that, by nature, involves “searching” in its older definition––can provide entirely unexpected insights.

Pre-Workshop Reflection

For me, Burnard’s and especially Gailey’s articles highlighted the adaptability and flexibility of digital text editing. Gailey took as her starting point how the verb ‘search’ acquired an altered meaning and became a contronym in the digital era of search engines. What I wonder about is how we can relate their statements to our field, and how digital editing will change our ways of editing and perceiving medieval manuscripts. Gailey’s thoughts highlighted the problems of the unavoidably selective process of interpretation during editing and the differences between the readers of different eras. These problems are especially relevant for medieval sources, and, as a medieval art historian, I wonder how we can confront them when imagery is also involved.

Text in the Age of Mechanical Shareability: A comment

Burnard proposes that markup makes explicit “a theory about some aspect of the document” and “maps a (human) interpretation of the text into a set of codes” and so “enables us to record human interpretations in a mechanically shareable way.” Though I think he rightly points out that text encoding does (to some extent) render visible the editor’s interpretive framework, his stance elides the circumstantial constraints attendant upon the use of markup languages. The mechanical shareability that Burnard associates with markup requires that interpretive acts be in some ways limited by the formal language of the markup. The ambiguity (or, perhaps, polysemy) that is often at the heart both of interpretation and of ‘theories of the text’ can simply yield a ‘validation error’ when encoded digitally.  These and related issues are central to Gailey’s analysis and her plea for “heavy editing.” Burnard suggests that this “single formalism” ultimately reduces complexity and facilitates a “polyvalent analysis,” yet he acknowledges that this depends on a single, unified encoding scheme. Burnard’s optimism regarding the power of digital encoding is, in my view, in many ways justified, though I think his comment about mechanically shareable “maps of (human) interpretation” should be qualified in this way.

Medieval Hypertexts and Division of Scholarly Labor

As I read Gailey’s reflections on the aspects of a text that markup-assisted “distant searching” may miss (like the identification of Whitman’s “my Captain” with Abraham Lincoln), I was reminded of an iconic moment of misreading from Piers Plowman: Mede, having triumphantly cited “as holiwryt telleth / Honorem adquiret qui dat munera &c” (B.3.342-3), is rebuked that if she had only turned the page, she would have found a “teneful tixte” that reverses her intended significance (B.3.344-53). This passage, like others in the poem, demonstrates the crucial role of that “&c” as a kind of manuscript hyperlink to another text; in order to interpret faithfully, medieval readers were often expected to supply entire verses or passages from memory, even when an incipit alone appears in the text at hand.

Just as Gailey and her colleagues choose whether or not to translate dialect, editors must choose more or less interventionist supports to the reader: identifying those snatches of Latin, tracing them to their sources through an intervening manuscript history, supplying the lines left implicit in the “&c”? If we go beyond the quoted words, how do we know when the intended passage ends? Does “eloi, eloi, lamach sabathani” call up the entirety of Psalm 22 (Vulgate 21), or is it an unintelligible performative utterance that evokes power through its historical connection to the cross of Christ—or both, in different contexts and for different readers? If we trace not only verbatim quotations but also allusion and resonance, the chains or networks of intertextuality are potentially infinite, and in determining where to cut them we very quickly enter the realm of interpretation.

Many scholars have noted the similarities between manuscript culture and hypertext, whether in the shifting mouvance of scribal variation or the hovering commentaries of the Glossa Ordinaria. The case of incomplete texts or incipits that summon their previous contexts highlights the most important difference between medieval techniques and modern technologies for linking texts. As Carruthers shows, the culture of memory that made manuscript “hyperlinks” possible was fundamentally moral in orientation; mnemotechnique was a means of disciplining the mind, internalizing wisdom, and embodying the virtue of one’s readings. Although over the centuries medieval readers were increasingly assisted by indexes, concordances, and other apparatus, this ideal of memory as an ethical practice endured into the early modern period. Mede’s failure to turn the page is thus not only a hermeneutic blunder but a sign of vice, as she self-interestedly cherry-picks prooftexts instead of reverently hearing and obeying the indivisible word of God. (This idea, of course, lives on in secularized guise: we teach close reading as a formative practice through which students develop salutary capacities of attention, care, and responsibility.) Technological, rather than mnemonic, “links” remove this ethical dimension of the reading practice, as certain responsibilities are externalized to digital tools rather than internalized as formative disciplines.

Or do they? What I found most interesting in these readings is the suggestion that digital editing does not so much reduce the attentive labor of close reading as redistribute it, so that the producers of an archive engage in meticulous, forensic examination of the text on behalf of its users, who can search or scan at a distance only on the strength of the producers’ previous interpretative work (e.g. Gailey 128). In this attentional economy, specialized division of labor allows novices to profit from the skills of experts, increasing the total efficiency of output. The possible objection that this deprives novices of certain morally or intellectually important practices is ultimately rooted in the Protestant conviction that each believer must search the Scriptures for himself or herself. Many medieval Christians, in contrast, accepted a division of labor between the estates, or differentiation of the members of Christ’s body, by which the clergy could study and interpret on behalf of lay people who benefitted from their labors, reading “distantly” (through sermons, stained glass, and heavily interpretive translations) rather than “closely” (the bare text). So digital technology may replicate manuscript culture, not only in its prolific hypertextuality, but also in underlying assumptions about the distribution of scholarly labor that encourage experts to mediate meaning for nonspecialist readers, who have no particular need to access the raw text. 

Pre-Workshop reflections

As a digital humanities neophyte, I found the articles by Burnard and Gailey to provide both a straightforward overview of terms and processes, and a broad analysis of the benefits and pitfalls of text encoding and tagging. Gailey’s example of her team’s editorial decisions regarding the distinct (controversial) dialect of Joel Chandler Harris’s works made me wonder about how the DEMMR team (and other medievalist digital humanists) treat common idiosyncratic features of medieval texts, such as spelling inconsistencies, abbreviations and corrections. How, in other words, is what we call the “critical apparatus” translated into the digital realm? I look forward to learning more about this in our workshop. Burnard’s call for a standard text markup procedure also made me consider the commentaries and “tags” to which medieval readers would have had access–both on the page in the form of commentaries (such as editions of Bibles in which the text is surrounded by a Glossa Ordinaria or other commentary), illuminations or illustrations, and corrections–and in their mind’s eye (references and connections to liturgy and scripture, for example). Not to be too trite, but I wonder how, in some ways, the creation of three-dimensional editions might sometimes draw us closer to, rather than further from, medieval reading experiences.
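TEI does in fact offer direct answers to the question of how abbreviations, scribal errors, and the critical apparatus translate into the digital realm: `<choice>` can pair an `<abbr>` with its `<expan>` or a `<sic>` with a `<corr>`, and the apparatus module records variant readings with `<app>`, `<lem>`, and `<rdg>`. A sketch with invented readings and hypothetical witness sigla:

```xml
<!-- Invented readings illustrating TEI's handling of abbreviation,
     editorial correction, and variant readings across witnesses. -->
<p>
  <!-- abbreviation with its expansion -->
  <choice><abbr>Ihu</abbr><expan>Iesu</expan></choice>
  <!-- scribal slip with an editorial correction -->
  <choice><sic>hte</sic><corr>the</corr></choice>
  <!-- apparatus entry: lemma and a variant, keyed to witness lists -->
  <app>
    <lem wit="#A">kyng</lem>
    <rdg wit="#B">kynge</rdg>
  </app>
</p>
```

In each case the editor’s intervention is recorded alongside, rather than in place of, what the manuscript actually reads — a digital analogue of the printed apparatus.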

Pre-Workshop Reflection

Burnard emphasizes standardized markup techniques as the logical progression of the study of the humanities, falling in line with the practices that have always been at its core.  Gailey examines both what is lost and gained through digitization, and the sorts of decisions we have to make during the process of digitizing in order to account for these new kinds of interactions.  On one hand, it is a more straightforward process to “search” texts.  On the other hand, because we are often no longer doing the act of searching, indirectly related items or aspects of an object that are difficult to classify can slip through the cracks.  I thought it was interesting to consider the purpose of certain digitized objects and what sorts of details are included to accommodate that purpose.  In both pieces, there is an interest in the editing/interpretive practice embedded in the very practice of encoding texts.  In certain circumstances interpretive practices are more obvious (Gailey’s point about Abraham Lincoln and “O Captain!”), whereas others are so conventional or accepted that their interpretive elements seem less obvious (“what constitutes a poem, a stanza, or even a word?”) (Gailey 132).  These will also be important details to keep in mind as we make our own editorial choices, whether they seem big or small.

Authentic [Digital] Manuscripts

Lou Burnard pushes against the notion of digitized-manuscript-as-facsimile, implying that digital humanists should not try to reproduce a material manuscript on a screen, as that involves defining an impossible authenticity, a nebulous “higher purer reality.” Burnard’s vision of a single, structured encoding system for manuscript digitization does, however, include standardizing markup in such a way as to create a more predictable experience with a digitized manuscript. A standard markup practice, if it is widely implemented, has the potential to become practically invisible over time as users begin to take it for granted. If we learn to expect certain manuscript viewing experiences on our screens, then readers might eventually see through markup (or its effects) whenever necessary in order to access the text in a slightly more direct (“authentic”?) way.* It will be interesting to see how our hermeneutic practices change as a result of TEI’s near-ubiquitous use for manuscript encoding.


*Standard markup practices may also increasingly permit users to control how visible, or how invasive, markup appears to be on any digitized text. Amanda Gailey provides one example of how this might work when she mentions giving the viewer the option to turn dialect translations on (accessing searchable text) or off (accessing text-as-written) in the works of Joel Chandler Harris.

The two readings raise some crucial points concerning editing practices. Gailey’s piece, in particular, shows how TEI is not only a useful tool in practical terms but also (and, in some respects, more importantly) contributes to creating a “theory of the text” (132). In other words, TEI helps us think about both formalistic issues concerning the text (e.g., what is a line or a stanza? Should we ‘regularise’ the language of the ‘original’ text?) and substantial questions about the text, i.e., what a text is. Digitised texts and critical editions make us think of the text as fluid. Moreover, digital critical editions invite us to reconsider the ‘authority’ of the editor(s), as they offer readers the opportunity not only to interpret the text but also to suggest or propose different editorial solutions.
Burnard’s piece focuses on the varied nature of the information a digital critical edition may provide through the markup system. In particular, the markup system takes into account compositional (i.e., rather formal) features of a text, contextual features, and interpretative features. The markup translates a human interpretation of the text into a set of codes. In this way, text encoding also helps us think more theoretically about both formalistic and more content-related elements of the text.
