I thoroughly enjoyed reading these two articles. They both clearly articulate the stakes of the conversations surrounding text editing and encoding in ways accessible to specialists and new scholars alike. Their clarity reveals that the bar for entry into the digital humanities is not as high as is often assumed, but it also allows them to drive home the point that text editing and encoding demand the same rigor associated with other forms of scholarship. I found the possibility of layering different scholarly interpretations of the same text to be eminently intriguing, particularly as a means of making academic discourse more transparent for non-specialists.
The readings also encouraged me to think more about decontextualization, not only of the text as a whole vis-à-vis its surroundings and textual contemporaries, but within texts themselves. How can editing and encoding be used to minimize the decontextualization that so often occurs in digital searches?
Something that feels unaddressed, however, is the question of language itself. What difficulties do non-Western languages pose? How might they complicate the choices inherent in the process of editing and encoding?