Ode to Nancy, Part 1

When I first came to Yale I wasn’t sure what “big” meant at Yale Library. I came from a public institution I thought was big: big campus, big buildings, lots of students. But I know now that the library wasn’t big. It was more like medium. Medium-rare, even. They didn’t have a large staff (funny, though, how we never thought we needed more people). The ILS held about 500,000 bib records, which is less than 5% of YUL’s. I thought YUL would be big enough for me to make more friends and colleagues, big enough to have almost any job you wanted. But then you talk to some folks here who tell you how small Yale is. That is, a small school, small campus, small student body, and so on. I think it depends on the day you are having whether you feel the bigness or the smallness of Yale.

In the interest of making YUL feel smaller and more accessible to everyone, I give you Nancy Lyon. No one should ever work here, even for one week, and not have the opportunity to meet Nancy Lyon. I’ve been here over nine years, and I do know Nancy through mutual friends, but it is only recently that I spent work time with her. What I fear is that most folks passing time here might never cross paths with Nancy. She works in the basement of Sterling, for starters. And you can’t get there without another staff member escorting you down a tiny, narrow spiral staircase like you’d find in some old NYC apartment. Nancy has a real meat-and-potatoes job, too, so you might not find her at high-profile committee meetings. But she is here, and she is a gem.

The first thing I like is that when she sees me she says, “Hey, Melis!” I can always tell people like me when they give me their own nickname for Melissa. (Never mind the formality of Melissa; this is who I am to her: Melis, Missy, etc.) She also asks how you are doing, and waits for your answer while she looks you in the eye. When you reach Nancy’s desk, you immediately see all her favorite things: tiny revelations about the kind of person she is and what is important to her. Nancy and I have the same print of a woman reading. It isn’t a famous print by Sargent or Manet, so I figure the odds of someone I know owning it are slim. She has a smaller copy on her desk. Her desk also holds a panoply of bits and bobs that could easily distract or entertain you for hours. But, let me tell you, Nancy is not distracted. Nancy is a woman with a mission and a serious commitment to data accuracy. It might be like combining Yue Ji and Tom Bruno.

Nancy gave me a detailed overview of her work and responsibilities: how she helps accession items into the Manuscripts and Archives collection, and how she prepares those items for accession to the Library Shelving Facility (LSF). Nancy also helps manage the physical space for MSSA, but more on that later. The process Nancy follows is very manual. There are some automation tools in place, but she still has a lot of high-touch steps to complete these procedures. As Nancy demonstrates them to me, she eyeballs me peripherally and says, “I see that look you have. I know you’re thinking to yourself this all needs to be automated.” I quickly apologize, as I must have a furrowed brow and a “What in the WorldCat?!” look on my face. The look isn’t about Nancy; rather, it’s my surprise at how critical work gets done, and my own serious commitment to offering colleagues efficient workflows. But here’s the rub. Nancy is so good at her job that you couldn’t make her much more efficient. She has it locked down. The way new parents might show you photos of their kids is how Nancy shows you the printouts of data mistakes she catches and records in case there are future questions. These are, of course, arranged by type of problem and by fiscal year. Of course, right?

Nancy has this narrative quality I love and often use to understand a program or a process. Some folks always want a short answer, and never more detail than they need at the minute. Not me and Nancy. I like the personalization, the A-to-Z-ness of it. I like that Nancy explains extracting data out of Archivists’ Toolkit into Voyager the way someone from Maine gives you directions to a diner and a recommendation for the coconut cream pie. The way Nancy explains it makes the pie taste better to me. I’m invested a little. And you should be too. Because as the data is being identified for extraction out of Archivists’ Toolkit, it so happens that a little man from Switzerland chaperones it. He’s there, as real as anything, to make sure all data is present and accounted for before making the next leg of the journey. I almost wish Library IT had added a little background alphorn into this part of the custom program so Nancy could tap her feet. Next comes what Nancy calls the “hold your breath moment.” This is when she waits for the reconciliation screen to show her that the right number of records were extracted and created. If it “balances,” Nancy is happy.

So I am learning Nancy’s workflow and taking copious notes. It is true that one day all of her steps will be automated, but it might be after she retires. Why mess with perfection? Part of me doesn’t want her workflow automated, because then I know Nancy is down there doing it just fine. Another part of me does, because then Nancy could do other work that requires such precision and heart.

Working more efficiently with ArchivesSpace — some use cases

We’ve talked a bit already about our work with a vendor to make sure ArchivesSpace supports efficient workflows. After reading Mark’s blog post, I’m re-energized to think of the good this work will do — archivists in repositories will be able to do collection control projects in a far more robust and efficient way!

Right now, we are deeeeeeeeep into this work. During our planning call last night, one of the software developers asked for use cases for some of our requirements for selecting and changing information about containers — I was going to just make a screencast and notes to put in our project management software, but it occurred to me that this could be useful information to document more publicly, especially since folks are asking about how ArchivesSpace can help them do their work more efficiently.

So, here we go. Here are some use cases.

Managing information about containers in bulk

An entire collection has been described, and I know for sure which containers I’ll be using. I’m now ready to slap barcodes on these boxes, tell the system what those barcodes are, tell the system what kinds of boxes I’m using (we call this a container profile), associate these boxes with a location, and associate each with an ILS holdings record identifier.

In order to do this, I need an easy way of saying “yeah, I’m looking for boxes that belong to this resource” or even “yeah, I’m looking for boxes that belong to this series.” In our environment (and I know this happens elsewhere too), it’s very common for each series to start over with box 1. So, when I’m at the point of putting a barcode on a box, I need to be sure I know which box 1 I’m sticking that barcode on.[1]
Holy Cow! Look at all of those box ones!

In Archivists’ Toolkit, if you want to do this work, you can only scope it to a single resource record. We know that it might be desirable to do bulk actions across several collections, so we want the option to scope this work to a single resource, but we don’t want to be stuck with it.
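A sketch of the selection behavior we want might look like the following (hypothetical Python; the record shapes are invented for illustration and are not the ArchivesSpace data model). The point is that resource and series are optional filters rather than a hard-wired single-resource scope:

```python
def select_containers(containers, resource=None, series=None):
    """Scope a result set the way described above: optionally narrow to a
    single resource, and optionally to one series within it, without being
    stuck with either filter."""
    hits = containers
    if resource is not None:
        hits = [c for c in hits if c["resource"] == resource]
    if series is not None:
        hits = [c for c in hits if c["series"] == series]
    return hits

containers = [
    {"resource": "MSS 111", "series": "I", "box": "1"},
    {"resource": "MSS 111", "series": "II", "box": "1"},  # another box 1
    {"resource": "MSS 222", "series": "I", "box": "1"},   # and another!
]

# Only the box 1 that lives in MSS 111, series II:
select_containers(containers, resource="MSS 111", series="II")
```

With no arguments you get everything; with both, you can be sure which box 1 you’re about to barcode.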

And then, in the current Archivists’ Toolkit plug-in, you would pick the containers you want and update various fields. We’ve been thinking slightly differently about which fields we would want to update and how, but suffice it to say that we would want to pick boxes that share an attribute (like location, ILS holdings ID, container type, whatever), and then be able to enter data about that attribute and expect the changes to propagate across containers. [2]
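To make the propagation idea concrete, here is a minimal sketch (Python, with invented field names; nothing here is the real ArchivesSpace schema or API): pick the containers that share an attribute, enter the new data once, and let it copy across the whole set.

```python
def bulk_update(containers, match_field, match_value, updates):
    """Apply `updates` to every container whose `match_field` equals
    `match_value`. Returns the number of containers changed."""
    changed = 0
    for container in containers:
        if container.get(match_field) == match_value:
            container.update(updates)
            changed += 1
    return changed

boxes = [
    {"box": "1", "series": "Series I", "ils_holding_id": None},
    {"box": "1", "series": "Series II", "ils_holding_id": None},
    {"box": "2", "series": "Series I", "ils_holding_id": None},
]

# Enter the ILS holdings ID once; it propagates to every box in Series I.
bulk_update(boxes, "series", "Series I", {"ils_holding_id": "9048576"})
```

The same call works whether the shared attribute is a location, an ILS holdings ID, or a container type.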

This is really exciting because currently in ArchivesSpace, every time you associate a container with a component (like “Correspondence from Anne Shirley to Diana Barry, 1877”), you have to enter the barcode anew. This obviously isn’t very efficient, and it can result in a whole lot of errors. In the workflow we’re proposing, you would be able to know that the box one for each of those components is the SAME box one, and you’d only have to enter the barcode once.

Managing the relationship between containers and locations in bulk

Here’s another use case: Maybe I’m doing a shelf-read. I come to a location in my repository that’s described in a location record. But maybe the location associated with those containers in the database isn’t correct! I want a quick and easy way of selecting the containers in my repository that I see in that location and associating those with the appropriate location record. This is currently impossible to do in ArchivesSpace — in fact, it does a really screwy thing where if you look up a location (like “Library Building, A Vault, Row 1, Bay 1, Shelf 1” ) it gives you a list of all the archival objects there (in other words, intellectual description), not the containers! You don’t want to go into every intellectual description and update the repeating box information. This work would make it possible to say “Boxes 3, 4, 5, 7, and 10 from MSS.111, series 4 all belong in this location” and for the system to know that.
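Here is a hedged sketch of what that quick-and-easy path could look like (Python, with invented field names and location URIs; not the real ArchivesSpace model): scan the barcodes you actually see on the shelf, then point every matching container at the correct location record.

```python
def assign_location(containers, scanned_barcodes, location_uri):
    """Associate every container whose barcode was scanned during the
    shelf-read with the given location record. Returns the barcodes moved."""
    scanned = set(scanned_barcodes)
    moved = []
    for container in containers:
        if container["barcode"] in scanned:
            container["location"] = location_uri
            moved.append(container["barcode"])
    return moved

containers = [
    {"barcode": "39002001", "location": "/locations/12"},
    {"barcode": "39002002", "location": "/locations/99"},  # wrong in the database!
    {"barcode": "39002003", "location": "/locations/44"},
]

# Standing at "Library Building, A Vault, Row 1, Bay 1, Shelf 1", I scan two boxes:
assign_location(containers, ["39002001", "39002002"], "/locations/12")
```

The shelf-read fixes the database, rather than the database dictating a round of record-by-record edits.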

Or maybe that location is a pallet or a set of shelves designated as needing to go off-site. In order to document the fact that they are, indeed, going off-site, I want a quick and easy way to pick those containers and update the ILS holdings ID. If the pallet or location is itself barcoded, that makes it even easier to disambiguate what I’m trying to do! [3]

I hope you’ve enjoyed this journey through how we want ArchivesSpace to work. Obviously, software development is an ever-changing endeavour, ruled by compromise and the art of the possible. So don’t take these use cases as promises. But they should give you a good sense of what our priorities and values are as we approach this project.

[1] Our user story for this is: “BULK SELECTION: As an archivist, I want an easy and fast way to choose all or some (contiguous and non-contiguous) records in a result set to further act upon.”

[2] Our user story for this is: “BULK SELECTION: As an archivist, I would like to define a set of container records that are associated with descendant archival_objects of an archival_object with a set Component Unique Identifier in order to perform bulk operations on that set.” which is obviously a perfect storm of software jargon and archives jargon, but basically we’re saying that we need to know which series this belongs to. Since series information is stored in the component unique identifier, that’s what we want to see.

[3] Our user story for this is: “BULK SELECTION: As an archivist, I would like to define a set of container records associated with a location in order to perform bulk operations on that set. When defining a set of container records by location, I would like the option to choose a location by its barcode.”

Cooperation, Co-Everything, and One (of many) Excellent Question(s)

Hi, everybody.  Long-time reader, first-time poster.  I’m Mark Custer, and I’ve been working as an Archivist and Metadata Coordinator at the Beinecke Rare Book & Manuscript Library for just over two years now.  This past year, most of my job duties have centered on ArchivesSpace. In addition to co-chairing Yale University’s ArchivesSpace Committee with Mary Caldera, I co-taught two ArchivesSpace workshops last year offered by Lyrasis, a membership community of information professionals formed by the merger of two regional consortia.  In October, I helped out at a Boston workshop as a trainer in training; and in December, I co-taught a workshop co-sponsored by the Rochester Regional Library Council and the University of Rochester.  Looking back on the year 2014, then, what stands out most to me in my professional life is the increasing importance and necessity of partnerships. The Latin prefix co- was everywhere, and I don’t think this notion of co-everything will be taking a backseat anytime soon.

These partnerships are precisely the sorts of things that have me so excited about ArchivesSpace.  To me, the most important thing that is emerging from the ArchivesSpace project so far is the community, not the system — don’t get me wrong, though, I’m extremely impressed by how the software has been able to combine the features and functions of Archivists’ Toolkit and Archon into a single project in such a short amount of time!  I’d even venture to say that the community is not only influencing the development of the software by making itself known through its individual and institutional voices, but that the community is also showing signs that it intends to nourish and nurture that software with a collective voice.  And, full disclosure, I’m also currently serving on the ArchivesSpace Users Advisory Council, so if you don’t agree with that statement, please let me know.

Of course, there’s still a long way for us to go.  For instance, at the end of the two-day ArchivesSpace workshop in Rochester, one of the participants asked an excellent question, which I’ll paraphrase here:

“How can I adopt more efficient workflows using ArchivesSpace?”

Each of the instructors, myself included, as well as a few of the other participants, offered suggestions in response to this important question.  What struck me about those answers, though, is that none of the suggestions were ArchivesSpace-specific just yet.  That shouldn’t actually surprise me, given the relative newness of ArchivesSpace – both the software and the community – but it does remind me that we have a lot of work to do.  And it’s precisely this sort of work that I’d really like to see the archival community communicating more about in 2015.

As Maureen has already talked about in another blog post (https://campuspress.yale.edu/yalearchivesspace/2014/11/20/managing-content-managing-containers-managing-access/), one of the ways that we’d like to enable more efficient workflows in ArchivesSpace is to enhance its container management features, ideally by really letting those functions run in the background so that archivists can focus on archival description.  A few other (collective) workflows that I hope that ArchivesSpace will make more efficient include:

  • Assessing archival collections
  • Printing box and folder labels
  • Publishing finding aids to external aggregators, such as ArchiveGrid, automatically
  • Integrating with other specialized systems, such as Aeon, Archivematica (check out what the Rockefeller Archive Center has done with Archivematica and the AT in this blog post http://rockarch.org/programs/digital/bitsandbytes/?p=1172, for example!), Google Analytics, SNAC, Wikipedia, etcetera

I’d love to hear how others would like to create efficiencies using ArchivesSpace, so please leave comments here or send me an email.  I think that we need to strive for cooperative systems that promote cooperative data, including web-based documents, and I really do think that the ArchivesSpace community is poised to achieve those goals.

Building a Community Through ArchivesSpace Implementation

So far you have probably seen posts by my colleagues discussing the efforts to make ArchivesSpace work in our complex multi-repository environment at Yale. To date, we have evaluated the application in its present form, hired consultants to develop additional functionality, and are currently engaged in extensive testing. However, in addition to trying to effectively implement ArchivesSpace, we have also needed to consider how we might work together more effectively.

There are twelve discrete repositories at Yale that will be implementing ArchivesSpace. Currently many of these repositories work in their own instance of Archivists’ Toolkit or outside of an archives management system, and the archivists at each repository have developed some individual repository-specific methods for managing containers and describing materials. While we need to ensure that ArchivesSpace will work for us, committing to a single, university-wide instance also provides a unique opportunity to further develop cooperation among the many repositories at Yale.

Much of our work to this end has been straightforward. For example, in the summer of 2014, our Committee standardized the controlled vocabulary lists in ArchivesSpace. However, some of our work has been more complex and far-reaching. In the fall of 2014, we interviewed archivists at all twelve repositories about their practices, including their approaches to managing containers and locations as well as their description of archival material, particularly non-paper formats. These interviews began with the explicit goal of gaining a better understanding of procedures at Yale so that our Committee could make sure that our implementation of ArchivesSpace met everyone’s needs. During our discussions of description, however, it became apparent that current practices diverge widely among campus repositories, requiring further cross-repository discussion of the description of born-digital materials, digital surrogates, and A/V materials.

We have formed a task force of archivists from multiple units on campus to determine basic guidelines for describing these types of materials. The task force will share its proposed description guidelines with all stakeholders at the University, responding to feedback and reaching consensus, with the goal of configuring Yale’s installation of ArchivesSpace to accommodate these guidelines.

We look forward to updating you on our progress and sharing our guidelines once they are complete.

Happy New Year!

Making ArchivesSpace Accessioning Work for Us (and You)

At Yale generally, and especially in my repository, the Beinecke Rare Book and Manuscript Library, we take accessioning very seriously. To give you a sense of the investment we make: within the Beinecke’s Manuscript Unit, I lead a staff of four (one professional [me] and three paraprofessionals) who work exclusively on manuscript accessioning. That doesn’t include our Printed Acquisitions department, which handles all of the published material that we acquire. It is a high volume of material, and we capture a lot of information at the point of accessioning, information that is relied upon and regularly queried by both staff and researchers.

As we began the process of implementing ArchivesSpace at Yale, the accessions module was the first that we reviewed in detail. This was partly because of its importance, but also because it presented an opportunity for me and my colleagues at the Beinecke. In 2012 the Beinecke Manuscript Unit implemented the Archivists’ Toolkit for manuscript accessioning, breaking from the accessioning database that the library had used since 1985 and which is still used by the Printed Acquisitions department today. Although using the AT for manuscript accessioning has been a great improvement for my operation, introducing a second accessioning database has complicated a previously simple situation. [The reasons we didn’t adopt the AT for printed accessioning are beyond the scope of this post, but I’m happy to provide further background to anyone interested.] We hoped that ArchivesSpace would allow us to reunite our accessioning databases and serve as a robust platform for further improvements in our accessioning operation.

In our analysis of ArchivesSpace accessions last winter, we identified a few crucial areas that required further development before we could consider implementing it as a single accessioning database for the Beinecke Library.

  • We needed finer control over who could edit accession records in order to better secure our accessions data.
  • We needed advanced search functionality in the staff interface (not just the public interface) in order to support sophisticated staff use of accession data.
  • We needed to be able to generate a variety of reports specific to accessioning in order to support a wide range of staff use cases.
  • We needed to be able to import MARCXML records as accessions (not just as resources) in order to improve the efficiency of our printed accessioning program.
  • We needed to be able to import records directly from OCLC WorldCat via their API in order to improve the efficiency of our printed accessioning program.
  • We needed to be able to spawn copies of existing accessions in order to improve the efficiency of our printed accessioning program.
  • We needed to be able to create accession-to-accession relationships in order to reflect the sibling and part relationships that many of our accessions have to each other.
  • We needed to implement a strict scheme for system-generated accession identifiers in order to ensure unique, meaningful identifiers across all repositories.
  • We needed to be able to capture complex information documenting payments in order to fully record the financial transactions that generate a majority of our accessions.
  • We needed to be able to record “material type” codes in order to migrate data in our current systems and support specific search strategies.

To achieve these goals we contracted with Hudson Molonglo, the firm that built ArchivesSpace, both to make changes to the core application and to develop a series of plugins. [We chose not to undertake work related to reports because of ongoing development by Lyrasis.] We worked with them over several months in the spring and are currently in the midst of further accessioning-related development (simultaneous with container-related development work).

Over the summer we issued pull requests to merge some of our development work into the core ArchivesSpace application. After review by the ArchivesSpace Users Advisory and Technical Advisory Councils, that code was incorporated into the ArchivesSpace 1.1.0 release. The application now includes the ability to set edit permissions separately for each of the major record types (accessions, resources, digital objects), advanced search in the staff interface, the ability to import MARCXML as an accession record, the ability to spawn copies of accessions, and the ability to create accession-to-accession relationships.

We also completed development work on several plugins, all of which are freely available on our committee’s GitHub repo.

  • aspace_yale_accessions: This plugin modifies the four-part accession identifier. The first part is restricted to fiscal year (based on the accession date), the second part is a department code selected from an enumerated list, and the third is a system-generated sequential four-digit number for each department and fiscal year. The fourth segment is suppressed.
  • extended_advanced_search: This plugin extends the fields available in the staff advanced search to include fields not included in the core application’s advanced search, including some user defined fields and fields created by our material_types and payment_module plugins.
  • material_types: This plugin allows us to add Material Type subrecords to our accession records. A Material Type subrecord consists of a set of Boolean check boxes that indicate the presence of certain formats (books, manuscripts, art, maps, games, etc.).
  • payments_module: This plugin allows us to add payment information to our accession records. A given accession record can have one Payment Summary subrecord, which can then have zero or more associated Payment records. Together these capture the financial details of our purchased accessions, including price, invoice number, fund code, currency, etc.
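As an illustration of the identifier scheme that aspace_yale_accessions enforces, here is a hedged Python sketch. The specific department codes and the July 1 fiscal-year rollover are assumptions made for this example, not the plugin’s actual configuration:

```python
from datetime import date

# Illustrative department codes only; the plugin draws on an enumerated
# list configured at Yale, not this set.
DEPARTMENT_CODES = {"beinecke", "mssa", "music"}

def next_accession_id(accession_date, department, last_sequence):
    """Build an identifier in the style described above: fiscal year
    (derived from the accession date), a department code from an
    enumerated list, and a zero-padded sequential number. Assumes a
    fiscal year beginning July 1."""
    if department not in DEPARTMENT_CODES:
        raise ValueError("unknown department code: " + department)
    fiscal_year = accession_date.year + (1 if accession_date.month >= 7 else 0)
    return "{}-{}-{:04d}".format(fiscal_year, department, last_sequence + 1)

next_accession_id(date(2014, 8, 15), "beinecke", 41)  # "2015-beinecke-0042"
```

The sequence counter restarts per department and fiscal year, which is why both appear in the identifier.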

During our current round of development we are working on two additional plugins. These are necessary in order for ArchivesSpace to be a viable replacement for our existing database for printed accessions, and they should be ready early in the new year.

  • aspace_oclc: This plugin will allow us to import bibliographic records from WorldCat as accessions in ArchivesSpace. This is accomplished via the WorldCat Metadata API.
  • yale_marcxml2accession_extras: This plugin will modify the generic MARCXML > accession import mapping created for the core application, extending it to accommodate our local needs.

We have additional development goals related to accessions in ArchivesSpace, but our work to date addresses most of the requirements we identified as critical to resolve before migrating our production accessioning databases to ArchivesSpace. I’m happy to discuss any of this work in further detail, either in the comments or elsewhere.

Managing Content, Managing Containers, Managing Access

In my last blog post, I talked a bit about why ArchivesSpace is so central and essential to all of the work that we do. And part of the reason why we’re being so careful with our migration work is not just because our data is important, but also because there’s a LOT of it. Just at Manuscripts and Archives (the department in which I work at Yale, which is one of many archival repositories on campus), we have more than 122,000 distinct containers in Archivists’ Toolkit.

At this scale, we need efficient, reliable ways of keeping physical control of the materials in our care. After all, if a patron decides that she wants to see material in one specific container among the more than 122,000, we have to know where it is, what it is, and what its larger context is in our collections.

Many years ago, when Manuscripts and Archives adopted Archivists’ Toolkit (AT) as our archival management system, we developed a set of ancillary plug-ins to help with container management. Many of these plug-ins became widely adopted in the greater archival community. I’d encourage anyone interested in this functionality to read this blog post, written at the time of development, as well as other posts on the AT@Yale blog  (some things about the plug-in look marginally different today, but the functions are more or less the same).

In short, our AT plug-in did two major things.

  1. It let us manage the duplication of information between AT and our ILS
    At Yale, we create a finding aid and a MARC-encoded record for each collection*. In the ILS, we also create “item records” for each container in our collection. That container has an associated barcode, information about container type, and information about restrictions associated with that container.
    All of this information needs to be exactly the same across both systems, and should be created in Archivists’ Toolkit and serialized elsewhere. Part of our development work was simply to add fields so that we could keep track of the record identifier in our ILS that corresponds to the information in AT.
  2. It let us assign information that was pertinent to a single container all at once (and just once).
    In Archivists’ Toolkit (and in ArchivesSpace too, currently), the container is not modeled. By this I mean that if, within a collection, you assign box 8 to one component and also box 8 to another component, the database has not declared in a rigorous way that the value of “8” refers to the same thing. Adding the same barcode (or any other information about a container) to every component in box 8 introduces huge opportunities for user error. Our plug-in for Archivists’ Toolkit did some smart filtering to create a group of components that have been assigned box 8 (they’re all in the same collection, and in the same series too, since some repositories re-number boxes starting with 1 at each series), and then created an interface to assign information about that container just once. Then, in the background, the plug-in duplicated that information for each component that was called box 8.
    This wasn’t just about assigning barcodes and Voyager holdings IDs and BIBIDs — it also let us assign a container to a location in an easy, efficient way. But you’ll notice in my description that we haven’t really solved the problem of the database not knowing that those box 8’s are all the same thing. Instead, our program just went with the same model and did a LOT of data duplication (which you database nerds out there know is a no-no).
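The plug-in’s filtering can be sketched roughly like this (Python, with invented record shapes; the real plug-in works against the AT database, not dictionaries): group components by the collection, series, and box indicator that together identify one physical container, then copy a value entered once onto every member of the group.

```python
from collections import defaultdict

def group_by_container(components):
    """Group components by the (collection, series, box) triple that
    identifies a single physical container."""
    groups = defaultdict(list)
    for c in components:
        groups[(c["collection"], c["series"], c["box"])].append(c)
    return groups

def assign_barcode(components, collection, series, box, barcode):
    """Enter the barcode once; behind the scenes it is duplicated onto
    every component housed in that box."""
    for c in group_by_container(components)[(collection, series, box)]:
        c["barcode"] = barcode

components = [
    {"collection": "MSS 111", "series": "I", "box": "8", "barcode": None},
    {"collection": "MSS 111", "series": "I", "box": "8", "barcode": None},
    {"collection": "MSS 111", "series": "II", "box": "8", "barcode": None},  # a different box 8!
]

assign_barcode(components, "MSS 111", "I", "8", "39002009999999")
```

Note that the data duplication is still there; the grouping just hides it from the person doing the work.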

Unfortunately, ArchivesSpace doesn’t yet model containers, and as it is now, it’s not easy to declare facts about a container (like its barcode or its location) just once. Yale has contracted with Hudson Molonglo to take on this work. Anyone interested in learning more is welcome to view our scope of work with them, available here — the work I’m describing is task 6 in this document, and I look forward to describing the other work they will be doing in subsequent blog posts. We’ve also parsed out each of the minute actions that this function should be able to do as a set of user stories, available here. Please keep in mind that we are currently in development and some of these functions may change.

Once this work is completed, we plan to make the code available freely and with an open source license, and we also plan to make the functions available for any repository that would like to use them. Please don’t hesitate to contact our committee if you have questions about our work.


* We (usually/often/sometimes depending on the repository) create an EAD-encoded finding aid for a collection at many levels of description, and also create a collection-level MARC-encoded record in a Voyager environment. This process currently involves a lot of copying and pasting, and records can sometimes get out of sync — we know that this is an issue that is pretty common in libraries, and we’re currently thinking of ways to synchronize that data.
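One simple-minded starting point for that synchronization work (a sketch only; the flat field names are invented, and comparing real EAD and MARC records is much messier than flat key lookups) is to diff the fields the two records are supposed to share:

```python
def out_of_sync(finding_aid, marc_record, shared_fields=("title", "extent", "dates")):
    """Return the shared fields whose values differ between the
    collection-level finding aid data and the MARC record."""
    return [f for f in shared_fields
            if finding_aid.get(f) != marc_record.get(f)]

ead = {"title": "John Doe Papers", "extent": "10 linear feet", "dates": "1900-1950"}
marc = {"title": "John Doe Papers", "extent": "12 linear feet", "dates": "1900-1950"}

out_of_sync(ead, marc)  # ["extent"]
```

A report like this wouldn’t fix the copy-and-paste problem, but it would at least surface the records that have drifted apart.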

Making Our Tools Work for Us

Metadata creation is the most expensive thing we do.

I hear myself saying this a lot lately, mostly because it’s true. In the special collections world, everything we have is unique or very rare. And since we’re in an environment where patrons who want to use our materials can’t just browse our shelves (and since the idea of making meaning out of stuff on shelves is ludicrous!), we have to tell them what we have by creating metadata.

Creating metadata for archival objects is different from creating it for a book — a book tells you more about itself. From a book’s title page, one can discern its title, its author, and its publisher. Often, an author will even write an abstract of what happens in the book, and someone at the Library of Congress will have done the work (what we call subject analysis) to determine its aboutness.

In archives, none of that intellectual pre-processing has been done for us. Someone doing archival description has to answer a set of difficult questions in order to create high-quality description — who made this? Why was it created? What purpose did it serve for its creator? What evidence does it provide about what happened in the past? And the same questions have to be addressed at multiple levels — what is the meaning behind this entire collection? What does it tell us about the creator’s impact on the world? What is the meaning behind a single file collected by the creator? What purpose did it serve in her life?

Thus, the metadata we create for our materials is also unique, rare, intellectually-intensive, and essential to maintain.

Here, as part of a planning session, Mary and Melissa are talking through which tasks need to be performed in sequence.

Currently, we use a tool called Archivists’ Toolkit to maintain information about our holdings, and this blog is about our process of migrating to a different tool, called ArchivesSpace. Because, like I say, this data is expensive and unique, we’ve taken a very deliberate and careful approach to planning for migration.

We’re lucky to have a strong group, with diverse backgrounds. Mary Caldera and Mark Custer are our co-chairs, and have strong management and metadata expertise between them. Melissa Wisner, our representative from Library IT, has a background in project management and business analysis. She was able to walk us through our massive project planning, and helped us understand and make sense of the many layers of dependencies and collaborations that will all have to be executed properly in order for this project to be successful. Others on the group include experts in various archival functions and standards. And beyond this, we have established a liaison system between ourselves on the committee and archivists from other repositories at Yale, so we can make sure that the wisdom of our wider community is being harnessed and the transition to this new system is successful for all of us.

Anyone interested in viewing our project timeline is welcome to see it here. We know that other repositories are also involved in transition to ArchivesSpace, and we would be happy to answer questions you may have about our particular implementation plan.

Yale Library IT Supports ArchivesSpace

The implementation of ArchivesSpace is a collaborative effort among archives and special collections units and Library IT. This project is exciting because ArchivesSpace is to special collections as Voyager is to YUL’s general collections. Library IT has a long-standing relationship with Voyager as an enterprise-level application, providing server support, coordinating upgrades, managing custom development, and encouraging an ideology of systems and data integration. The implementation of ArchivesSpace (AS) will run a similar gamut of standard IT support: supporting three instances of AS (dev, test, production), configuring an LDAP server and Active Directory group for authentication, assisting with data analysis and export, participating in a series of sprints with a third-party vendor to develop custom plug-ins for AS, and managing some in-house development to integrate AS with Voyager. While these tasks are typical of an IT project, success depends on the collaborative relationship IT develops with archival and special collections staff. Library IT is investing time to learn the current and expected workflows supported by AS, and why the tool is critical to the daily operations of archivists and special collections professionals. IT is also learning the lexicon employed by special collections (one day I hope to fully understand what a container is) and what archivists geek out on. Ultimately, a new system’s success rests on both its technical and its social elements.


Welcome to ArchivesSpace@Yale, where members of Yale’s implementation team will share our goals, milestones, highs and lows. Our primary goals are to

  • promote use of one archival management system across Yale;
  • migrate and merge three Archivists’ Toolkit instances into one ArchivesSpace instance; and
  • contribute to the ArchivesSpace community.

To learn more about Yale’s ArchivesSpace Committee and implementation visit our Web site! We will also share our scripts and plug-ins via GitHub, but that is a few months away still.