Data for Curators: OPenn and Bibliotheca Philadelphiensis as Use Cases

Following are my remarks from the Collections as Data National Forum 2 event held at the University of Nevada, Las Vegas, on May 7, 2018. Collections as Data is an Institute of Museum and Library Services supported effort that aims to foster a strategic approach to developing, describing, providing access to, and encouraging reuse of collections that support computationally driven research and teaching in areas including but not limited to Digital Humanities, Public History, Digital History, data-driven Journalism, Digital Social Science, and Digital Art History. The event was organized by Thomas Padilla, and I thank him for inviting me. It was a great event and I was honored to participate.

Today I’m going to be talking about curators as an audience for collections as data, using two projects from the University of Pennsylvania’s Kislak Center for Special Collections, Rare Books and Manuscripts as use cases. I am a curator in the Kislak Center, and I spend most of my time working on projects under the aegis of the Schoenberg Institute for Manuscript Studies (SIMS), a unit within the Kislak Center. SIMS is a kind of research and development group (our director likes to refer to it as a think tank) that focuses on manuscript studies writ large, mostly but by no means only medieval manuscripts from Europe, and that specializes in examining the relationship between manuscripts as physical objects and their digitized counterparts.

For this session, we’ve been asked to react to this assertion from the Collections as Data Santa Barbara Statement – “Collections as data designed for everyone serve no one” – and to discuss the audiences that our collections as data are built for.

I’ll start with OPenn, which launched in May 2015 as an open access collection of Penn’s digitized manuscript material. Penn started digitizing its manuscripts in the mid-1990s, but the results had been virtually locked in a black-box system. To create OPenn we cracked open the box, generated new derivative images from the master TIFF files, generated TEI/XML manuscript description files using the data from our catalog and supporting databases, and put it all on a fully public file server. Collection navigation is provided by HTML pages: one that lists all the repositories, pages listing the manuscripts in each repository, and finally an HTML page for each manuscript presenting the catalog data and links to the image files. At the time OPenn launched there was no search facility, although one has recently been added.

OPenn’s developer, Doug Emery, describes the access that OPenn provides as friction-free access, referring both to the licensing (the image files are in the public domain, the metadata is licensed CC BY) and to the technical access. There’s no login and no API. You can navigate to the site in a browser and download images, or you can point wget at the server and bulk download entire manuscripts.
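
To make “friction-free” concrete, here is a minimal Python sketch of a bulk download, the scripted equivalent of pointing wget at a manuscript directory. The URL and file paths are illustrative placeholders, not guaranteed OPenn locations.

```python
# Bulk download sketch: plain HTTP GETs, no login, no API key.
# BASE and FILES are hypothetical; substitute real paths from OPenn.
import os
import urllib.request

BASE = "https://openn.library.upenn.edu/Data/0001/ljs280"  # hypothetical path
FILES = ["data/ljs280_TEI.xml"]  # extend with image paths from the file list

os.makedirs("ljs280", exist_ok=True)
for path in FILES:
    dest = os.path.join("ljs280", os.path.basename(path))
    urllib.request.urlretrieve(f"{BASE}/{path}", dest)  # no auth required
    print("saved", dest)
```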

When we were designing OPenn, we weren’t thinking that much about the audience, honestly. We were thinking about pushing the envelope with fully available, openly licensed, high resolution, robustly described and well-organized digitized medieval manuscripts. We did imagine who might use our collections, and how, and you can read the statement from our readme here on the screen.

But I can’t say that we built the system to serve any audience in particular. We did build the system in a way that we thought would be generally useful and usable. It became clear after OPenn launched, though, that our lack of an audience made it difficult for us to “sell” OPenn to any group of people. Medievalists, faculty and students, who might want to use the material were put off by the relatively high technical learning curve, the simple interface (lacking the expected page-turning view), and the lack of search (we do have a Google search now, but it was only added to the site in the past month). Data analysts who might want to visualize collection-wide data were put off by the data being split into one TEI file per manuscript. Indeed, data designed for everyone does seem to serve no one.

But wait! Don’t lose hope! An accidental audience did present itself. In the months and into the first year after OPenn launched, it was slowly used as a source for projects. The Philadelphia Area Consortium of Special Collections Libraries (PACSCL) undertook a collaborative project in which each member institution digitized five diaries from its collections; the results went on OPenn as the PACSCL Diaries Project.

When the project went live, the folks at PACSCL wanted a user-friendly way to make the diaries available, so I generated page-turning interfaces using the Internet Archive BookReader that pull in metadata from the TEI files and point to the image files served on OPenn.

At some point I decided that I wanted to get a better sense of one of our manuscript collections, the Lawrence J. Schoenberg Collection, so again I wrote a script to generate a CSV file pulling from all the collection’s TEI files. Jessie Dummer, the Kislak Center’s Digitization Project Coordinator, cleaned up the data in the CSV, and we were able to load it into Palladio for visualization and analysis (on GitHub).
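
The script itself can be tiny. Here is a minimal sketch of that TEI-to-CSV step, assuming a folder of TEI files; the elements it pulls are illustrative, and a real script would grab whatever fields the analysis needs.

```python
# Flatten a directory of TEI manuscript descriptions into one CSV.
import csv
import glob
import xml.etree.ElementTree as ET

NS = {"tei": "http://www.tei-c.org/ns/1.0"}  # the TEI namespace

with open("collection.csv", "w", newline="", encoding="utf-8") as out:
    writer = csv.writer(out)
    writer.writerow(["file", "title", "date"])
    for path in sorted(glob.glob("tei/*.xml")):
        root = ET.parse(path).getroot()
        # Illustrative fields: the first msItem title and the origin date.
        title = root.findtext(".//tei:msItem/tei:title", default="", namespaces=NS)
        date = root.findtext(".//tei:origDate", default="", namespaces=NS)
        writer.writerow([path, title, date])
```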

I combined the links to images on OPenn with data gathered through another SIMS project, VisColl (which I’ll describe in a bit more detail later), to generate a visualization of the gathering structure of manuscripts with the bifolia, or sheets, laid alongside. And last but not least, I experimented with setting up an IIIF image server that could serve the images from OPenn as IIIF-compatible images (this is a screenshot of the GitHub site where I published the IIIF manifests I generated as part of that project, but they don’t work because the server no longer exists).

The accidental audience? It was me.

I don’t remember thinking about or discussing with the rest of the team, as we planned for OPenn, how I might use it as part of my regular work. I was familiar with the concept of an open collection of metadata and image files online; OPenn was based on The Digital Walters, which Will Noel, Director of the Kislak Center, and Doug Emery had built when they were employed at the Walters Art Museum in Baltimore, and I had been playing with that data for a year before I was even hired at Penn. I must have known that I would use it, I just didn’t realize how much I would use it, or how having it available would change the way I thought about my work and the way I worked with the collections. The things that made it difficult for other people to use OPenn – the lack of a search facility, the dependence on XML – didn’t affect me negatively. I already knew the collection, so a search wasn’t necessary, and at the time OPenn launched I had been working with XML technologies for ten years or so, so I was very comfortable with it.

Having OPenn as a source for data gives me so much in my curatorial role: the flexibility to build the interfaces I want using tools I can understand, easy access, and familiar formats.

At the very end of 2015, several months after OPenn was launched, we, along with PACSCL, Lehigh University, and the Free Library of Philadelphia, were awarded a grant from the Council on Library and Information Resources under the “Digitizing Hidden Collections” program to digitize western medieval manuscripts in 15 Philadelphia-area libraries. We call the project Bibliotheca Philadelphiensis, the “library of Philadelphia”, or BiblioPhilly for short. Drawing on my experience with the data on OPenn, during the six-month lead-up to cataloging and digitization I was able to write the requirements for the BiblioPhilly metadata in a way that guaranteed the resulting data would be useful to me and to the curators and librarians at the other institutions. Some of the things we implemented include a closed list of keywords (based on the keyword list developed for the Digital Walters), in contrast with the Library of Congress subject headings in OPenn, and four different date fields (date range start, date range end, single date, and narrative date) with strict instructions for each (except for narrative date) to ensure that the dates will be computer-readable.
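
As an illustration of what those strict instructions buy you, here is a sketch of the kind of check they make possible; the field names mirror the list above, but the schema is hypothetical, not the project’s actual one.

```python
# Hypothetical check: a record is computable if it has either a single date
# or a start/end pair of plain years in the right order. The narrative date
# is free text and deliberately ignored here.
def dates_are_computable(record):
    start = record.get("date_range_start")
    end = record.get("date_range_end")
    single = record.get("single_date")
    if single is not None:
        return start is None and end is None and single.isdigit()
    return (start is not None and end is not None
            and start.isdigit() and end.isdigit() and int(start) <= int(end))

assert dates_are_computable({"single_date": "1425"})
assert dates_are_computable({"date_range_start": "1400", "date_range_end": "1475"})
assert not dates_are_computable({"date_range_start": "1475", "date_range_end": "1400"})
```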

We have also integrated data from VisColl into BiblioPhilly, both into the data itself and, in combination with that data, into the interfaces. VisColl, as I mentioned before, is a system to model and visualize the quire structure of manuscripts. (A manuscript’s quire structure is called its collation, hence the name VisColl: visualizing collation.) VisColl models are XML files that describe each leaf in a manuscript and how those leaves relate to each other (whether they are in the same quire, whether they are conjoined, whether a leaf is missing or has been added, etc.). From a model we’re able to generate a concise description of a manuscript’s construction, in a format referred to as a collation formula, and this formula is included in the manuscript’s cataloging and becomes part of the TEI manuscript description. We’re also able to combine the information from the collation model with the links to the image files on OPenn to generate views of a collation diagram alongside the sheets that make up the quires.

For BiblioPhilly, because of the experimentation we did with Penn manuscripts on OPenn, we’ve been able to make the digitized BiblioPhilly manuscripts available online in ways that are friendlier to non-technical users than OPenn is, even before we have an “official” project interface. We did this by building an In Progress Viewer relatively early on. The aim of the In Progress Viewer was 1) to provide technically simple, user-friendly ways to search, browse, and view the manuscripts, and 2) to make information available both about the manuscripts that were online and about the manuscripts that had yet to go online (including the date they were photographed, so users can track manuscripts of particular interest through the process).

The first In Progress Viewer was built in the Library of Congress’s Viewshare, which provided faceted browsing for all the fields in our records, along with a timeline and a simple mapping facility. Unfortunately the Library of Congress is no longer supporting Viewshare, and when it went offline on March 20 we moved to an Omeka platform, which is more attractive but lacks the faceted searching that made Viewshare so compelling. From Omeka (and Viewshare before it) we link to the manuscript data on OPenn, to Internet Archive BookReader page-turners, and to VisColl collation views. Both the BookReaders and the VisColl views are generated locally from scripts and hosted on a DigitalOcean droplet. This is a temporary system, not built to last beyond the end of the project; it will be replaced by an official, longer-lived interface.

We’re also able to leverage the OPenn-style design of the BiblioPhilly data and VisColl for this “official” interface, which is currently under development with Byte Studios of Milwaukee, Wisconsin. While our In Progress Viewer has both a page-turning facility and collation views, those elements are separate and are not designed to interact. The interface we are designing with Byte Studios incorporates the collation data into the page-turning and will allow a user to switch seamlessly between page openings and full sheets.

It’s exciting that we’ve been able to leverage what was essentially an audience-less platform into something that serves its curator so well, but there is a question that this approach pushes wide open: what does it mean to be a curator? With a background in digital humanities focused on the development of editions of medieval manuscripts, I was basically the perfect curator for OPenn. But that was a happy accident. Most special collections curators don’t have my background or my technical training, so access to something like OPenn wouldn’t help them, and I’m very hesitant to suggest that every curator be trained in programming. I do think that every special collections department should have some in-house digital expertise, and maybe that’s the direction to go. Anyway, I’m very happy in my current situation, and I only wish we’d considered the curator as an audience for OPenn earlier in the process.

Ceci n’est pas un manuscrit: Summary of Mellon Seminar, February 19th 2018

This post is a summary of a Mellon Seminar I presented at the Price Lab for Digital Humanities at the University of Pennsylvania on February 19th, 2018. I will be presenting an expanded version of this talk at the Rare Book School in Philadelphia, PA, on June 12th, 2018.

In my talk for the Mellon Seminar I presented on three of my current projects, talked about what we gain and lose through digitization, and made a valiant attempt to relate my talk to the theme of the seminars for this semester, which is music and sound. (The page for the Mellon Seminars is here, although it only shows upcoming seminars.) I’m not sure how well that went, but I tried!

I started my talk by pointing out that medieval manuscripts are physical objects – sometimes very large objects! They have weight and size and heft, and unlike static objects like sculptures, manuscripts move. They need to move in order for us to read them. But digitized manuscripts – the ones you find for example in Penn in Hand, the page-turning interface for Penn’s digitized manuscript collection – don’t really move. Sure, we have an interface that gives the impression of turning the pages of the book, but those images are flat, static files that are just the latest version in a long history of facsimile copies of manuscripts. A page-turning interface for medieval manuscripts is the equivalent of taking a book, cutting the pages out, and then pasting those pages into a photo album. You can read the pages but you lose the sense of the book as a physical object.

It sounds like I’m complaining, but I’m really not. I like that digital photographs of manuscripts are readily available and relatively standard, but I do think it’s vitally important that people using them are aware of how they’re different from the “real” manuscript. So in my talk I spent some time deconstructing a screenshot from a manuscript in Penn in Hand (see above). It presents itself as a manuscript opening (that is, two facing pages), but it should be immediately apparent that this is a fake. This isn’t an opening in the book; it’s two photos placed side by side to give the impression of an opening of the book. There is a dark line down the center of the window which clearly delineates the photo on the left from the one on the right. You can see two gutters – the book only has one, of course, but each photo includes it – and you can also see a bit of the text on the facing page in each photo. From the way the text is angled you can tell that this book was not laid flat when it was photographed; it was held at or near a 90-degree angle. (And here’s another lie: the impression the page-turning interface gives us is that of a book laid flat. Very few manuscripts lie flat. So many lies!)

We can see in the left-hand photo the line of the edge of the glass, to the right of the gutter and just to the left of the black line. In our digitization lab we use a table with a spring-loaded top and a glass plate that is lowered onto the page to hold it flat. (You can see a two-part demo of the table on Facebook, Part One and Part Two.) This means the photographer always knows where to focus the camera (that is, at the level of the glass plate), and as each page of the book is turned the pages stay the same distance from the camera (hence the spring under the table top). I think it’s also important to know that when you’re looking at an opening in a digital manuscript, the two photos in that composite view were not taken one after the other; they were possibly taken hours apart. In SCETI, the digitization lab in the Penn Libraries, all the rectos (that is, the fronts of the pages) are photographed first, then the versos (the backs of the pages), and then the system interleaves them. (For an excellent description of digital photography of books and the issues around it, please see Dr. Sarah Werner’s Pforzheimer Lecture at the Harry Ransom Center on Early Digital Facsimiles.)

I moved from talking about how digital images served through page-turning interfaces provide one kind of mediated (~fake~) view of manuscripts to one of my ongoing projects that provides another kind of mediated (also fake?) view of manuscripts: video. I could talk and write for a long time about manuscript videos, and I am trying to summarize my talk rather than present it in full, so I’ll just say that one advantage videos have over digitized images is that they do give an impression of the “real” manuscript: the size of it, the way it moves (Is it stiff? How far can it open? Is the binding loose or tight?), and – relevant to the Seminar theme! – how it sounds. I didn’t really think about it when I started making the videos four years ago, but if you listen carefully to any of the videos you can hear the pages (and in some cases the bindings), and if you listen to several of them you can really tell the difference between how different types of parchment and paper sound. Our complete YouTube playlist of video orientations is here, but I’ll embed one of my favorites below. This is LJS 280, a 13th-century copy of Decretales Gregorii IX in a 15th-century chain binding that makes a lot of noise.

I don’t want to imply that videos are better than digital images – they just tell us something that digital images can’t. And digital images are useful in ways that videos aren’t. For one thing, if you’re watching a video you can see the way the book moves, but I’m the one moving it. It’s still a mediated experience, it’s just mediated in a different way. You can see how it moved at a specific time, in a specific situation, with a specific person. If you want to see folio 45v, you’re out of luck, because I didn’t turn to that page (and even if I had, the video resolution might not be high enough for you to read it; the video isn’t for reading – that’s why we have the digital images).

There’s another problem with videos.

In four years of the video orientation program, we have 74 videos online. We could have more if we made it a higher priority (and arguably we should), but each one takes time: for research, to set up and take down equipment, for the recording (sometimes multiple takes), and then for the processing. The videos are also part of the official record of the manuscript (we load them into the library’s institutional repository and link them to records in the library’s catalog) and doing that means additional work.

At this point I left videos behind and went back to digital images, but a specific project: Bibliotheca Philadelphiensis, which we call BiblioPhilly. BiblioPhilly is a major collaborative project to digitize medieval manuscripts from institutions across Philadelphia, organized by the Philadelphia Area Consortium of Special Collections Libraries (PACSCL) and funded by the Council on Library and Information Resources (CLIR). We’re just entering year three of a three-year grant, and when we’re done we’ll have 476 manuscripts online (we have around 130 online now). If you’re interested in checking out the manuscripts that are online, and to see what’s coming, you can visit our search and browse site here.

The relevance of BiblioPhilly in my talk is that we’re being experimental with the kind of data we’re creating in the cataloging work, and with how we use that data to provide new and different manuscript views.

Manuscript catalogers traditionally examine and describe the physical structure of the codex. Codex manuscripts start as sheets of parchment or paper, which are stacked and folded to create booklets called quires. Quires are then gathered and sewn together to make a text block, which is then bound to make the codex. Describing the physical structure means answering a few questions: How many quires? How many leaves in each quire? Are there leaves that are missing? Are there leaves that are singletons (i.e., were never part of a sheet)? When catalogers have answered these questions they traditionally describe the structure using a collation formula. The formula lists the quires, the number of leaves in each quire, and any variations. For example, a manuscript with ten quires, all of which have eight leaves except for quire six, which has four, and with some missing leaves, might have a formula like this:

1-4(8), 5(8, -4,5), 6(4), 7-10(8)

(Quires 1 through 4 have eight leaves, quire 5 had eight leaves but four and five are now missing, quire 6 has four leaves, and quires 7-10 have eight leaves)

The formula is standardized for printed books, but not for manuscripts.
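
That lack of a standard means any code handling these formulas has to commit to one syntax. Here is a minimal sketch that expands the example formula above into surviving-leaf counts per quire; it parses only the toy grammar used in this post, not any cataloging standard.

```python
import re

# Quire entries look like "1-4(8)", "5(8, -4,5)", or "6(4)": a quire number
# or range, the leaf count, and optionally a list of missing leaves.
QUIRE = re.compile(r"(\d+)(?:-(\d+))?\((\d+)(?:,\s*-([\d,]+))?\)")

def surviving_leaves(formula):
    quires = {}
    for first, last, leaves, missing in QUIRE.findall(formula):
        lost = missing.count(",") + 1 if missing else 0  # "4,5" -> 2 leaves lost
        for q in range(int(first), int(last or first) + 1):
            quires[q] = int(leaves) - lost
    return quires

print(surviving_leaves("1-4(8), 5(8, -4,5), 6(4), 7-10(8)"))
# {1: 8, 2: 8, 3: 8, 4: 8, 5: 6, 6: 4, 7: 8, 8: 8, 9: 8, 10: 8}
```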

Using tools developed through the research project VisColl, which is designing a data model and system for describing and visualizing the physical construction of manuscripts, we’re building models for the manuscripts as part of the BiblioPhilly cataloging process, and then using those models to generate the formulas that go into our records. This in itself is good, but once we have models we can use them to visualize the manuscripts in other ways too. So if you go to the BiblioPhilly search and browse site and peek into the records, you’ll find that some of them include links to a “Collation View”.

Following that link will take you to a page where you can see diagrams showing each quire, and image files organized to show how the leaves are physically connected through the quire (that is, the sheets that were originally bound together to form the quire).

Like the page-turning interface, this gives us a false impression of what it would be like to deconstruct the manuscript and view it in a different way, but like the video it also gives us a view of the manuscript that is based in some way on its physicality.

And this is where my talk ended. We had a really excellent question and answer session, which included a question about why I don’t wear gloves in the videos (my favorite question, which I answer here with a link to this blog post at the British Library) but also a lot of great discussion about why we digitize, and how, and why it matters, and how we can do it best.

Thanks so much to Glenda Goodman and Stewart Varner for inviting me, and to everyone who showed up.

Slides from OPenn Demo at the American Historical Association Meeting

This week I participated in a workshop organized by the Collections as Data project at the annual meeting of the American Historical Association in Washington, DC. The session was organized by Stewart Varner and Laurie Allen, who introduced the session, and the other participants were Clifford Anderson and Alex Galarza.

The stated aim of the session was “to spark conversations about using emerging digital approaches to study cultural heritage collections,” (I’ll copy the full workshop description at the end of this post) but all of our presentations ended up focusing on the labor involved in developing our projects. This was not planned, but it was good, and also interesting that all of us independently came to this conclusion.

Clifford’s presentation was about work being done by the Scholarly Communications team at Vanderbilt University Libraries as they convert data from legacy projects (which have tended to be purpose-built, siloed, and bespoke) into more tractable, reusable open data, and Alex told us about the GAM Digital Archive Project, which is digitizing materials related to human rights violations in Guatemala. Both Clifford and Alex stressed the amount of time and effort it takes to do the work behind their projects. The audience was mainly history faculty and maybe a few graduate students, and I expect Clifford and Alex, like me, wanted to make sure that audience understood that the issue of where data comes from is arguably more important than the existence of the data itself.

My own talk was about the University of Pennsylvania’s OPenn (Primary Digital Resources Available to Everyone), which if you know me you probably already know about. OPenn is the website in which the Kislak Center for Special Collections, Rare Books and Manuscripts publishes its digitized collections in the public domain; it also hosts collections for many other institutions. These include several libraries and archives around Philadelphia that are partners on the CLIR-funded Bibliotheca Philadelphiensis project (a collaboration with Lehigh University, the Free Library of Philadelphia, Penn, and the Philadelphia Area Consortium of Special Collections Libraries), which I always mention in talks these days (I’m a co-PI, and much of the work of the project is being done at Penn). I also focused my talk on the labor of OPenn, mentioning the people involved and including slides on where the data in OPenn comes from, which I haven’t mentioned in a public talk before.

Ironically I ended up spending so much time talking about what OPenn is and how it works that I didn’t have time to show much of the data, or what you can do with it. But that ended up fitting the (unplanned) theme of the workshop, and the attendees seemed to appreciate it, so I consider it a success.

Here are my slides:

Workshop abstract (from this page):

The purpose of this workshop is to spark conversations about using emerging digital approaches to study cultural heritage collections. It will include a few demonstrations of history projects that make use of collection materials from galleries, libraries, archives, or museums (GLAM) in computational ways, or that address those materials as data. The group will also discuss a range of ways that historical collections can be transformed and creatively re-imagined as data. The workshop will include conversations about the ethical aspects of these kinds of transformations, as well as the potential avenues of exploration that are opened by historical materials treated as data. Part of an IMLS-funded National Digital Forum grant, this workshop will ultimately inform the development of recommendations that aim to support cultural heritage community efforts to make collections more readily amenable to computational use.

The Historiography of Medieval Manuscripts in England (and the USA)

The text of a lightning talk originally presented at The Futures of Medieval Historiography, a conference at the University of Pennsylvania organized by Jackie Burek and Emily Steiner. Keep in mind that this was very lightly researched; please be kind.

Rather than the originally proposed topic, the historiography of medieval manuscript descriptions, I will instead be talking about the historiography of medieval manuscripts specifically in England and the USA, as perceived through the lens of manuscript descriptions.

We’ll start in the late 12th century and move into the 15th, when monastic houses cataloged the books in their care using little more than a shelf-list. Such a list would be practical in nature: the community needs to know what books it owns, so that as books are borrowed internally or loaned to other houses (or perhaps sold) there is a way to keep track of them. Entries on the list would be very simple: a brief statement of contents, and perhaps a note on the number of volumes. There is, of course, an entire field of study around reconstructing medieval libraries using these lists, and as the descriptions are quite simple it is not an easy task.

c. 1190-1200. Cambridge, Jesus College MS 34, fol. 1r. First catalogue of the library of Rievaulx. (Plate 3 from The Libraries of the Cistercians, etc. Vol. 3 in Corpus of British Medieval Library Catalogues, 1992)
Late 13th c. Oxford, Bodleian Library MS Rawlinson B. 336, page 187. Catalogue of the library of St Radegund’s abbey at Bradsole. (Plate 5 from The Libraries of the Cistercians, etc. Vol. 3 in Corpus of British Medieval Library Catalogues, 1992)
1400. London, BL MS Additional 70507, fol. 2r. Description of the library at Titchfield (Plate 6 from The Libraries of the Cistercians, etc. Vol. 3 in Corpus of British Medieval Library Catalogues, 1992)

In the 15th and 16th centuries two major historical events played, I expect, a major role both in changing the reception of manuscripts and in the development of manuscript descriptions moving forward: the invention of the printing press in the mid-15th century, and the dissolution of the monasteries in the mid-16th century. The first made it relatively easy to print multiple copies of the same book, and also began the long process that rendered manuscripts obsolete. The second led to the transfer of monastic books from institutional into private hands, and to the development of private collections with singular owners. When it came to describing their books, these collectors seem to have been interested in describing them for themselves and other collectors, and not only for the practical purpose of keeping track of them. Here is a 1697 reprint of a catalog, published in 1600, of Matthew Parker’s private collection (bequeathed to Corpus Christi College Cambridge in 1574). You can see that the descriptions themselves are not much different from those in the manuscript lists, but the technology for sharing the catalog – and thus the audience for the catalog – is different.

1600. Ecloga Oxonio–Cantabrigiensis, tributa in libros duos, quorum prior continet catalogum confusum librorum manuscriptorum in illustrissimis bibliothecis, duarum florentissimarum Academiarum, Oxoniae et Cantabrigiae (London, 1600; reprinted in 1697)

In the later 16th century and into the 17th, these private manuscript collections began to be donated back to institutions (educational and governmental), leading to descriptions for yet another audience and for a new purpose: for institutions to inform scholars of what they have available for their use. The next three examples, from three catalogs of the Cotton collection (now at the British Library), reflect this movement. The first is from a catalogue published in 1696; the content description is perhaps a bit longer than in the earlier examples, and barely visible in the margin is a bit of physical description: this is a codex with 155 folios. Notably this is the first description we’ve looked at that mentions the size of the book at all, so we are moving beyond a focus only on content. The next example, from 1777, is notable because it completely foregrounds the contents. This catalog as a whole is organized by theme, not by manuscript (you can see below the contents listed out for Cotton Nero A. i), so we might describe it as a catalog of the collection, rather than a catalog of the manuscripts comprising the collection.

1696. Catalogue of the manuscripts in the Cottonian Library, 1696 (facsimile 1984)
1777. A catalogue of the manuscripts in Cottonian library: to which are added many emendations and additions. With an appendix containing an account of the damage sustained by the fire in 1731; and also a catalogue of the charters preserved in the same library. British Museum Dept. of Manuscripts, 1777

The third example is from the 1802 catalog, and although it’s still in Latin we can see that there is more physical description as well as more detail about the contents and appearance of the manuscript. There is also a citation to a book in which the preface on the manuscript has been published – the manuscript description is beginning to look a bit scholarly.

1802. A catalogue of the manuscripts in the Cottonian library deposited in the British museum : printed by command of His Majesty King George III. &c. &c. &c. in pursuance of an address of the House of Commons of Great Britain. British Museum Dept. of Manuscripts, 1802

We’ll jump ahead 150 years, and we can see that in that time concern with manuscripts has spread out from the institution to the realm of the scholar. This example is from N. R. Ker’s Catalogue of Manuscripts Containing Anglo-Saxon; rather than focusing on the books in a particular collection, it focuses on a class of manuscripts, regardless of where they are physically located. The description is in the vernacular, and has more detail in every regard. The text is divided into sections as well: general description; codicological description; discussion of the hands; and provenance.

1957. N. R. Ker, Catalogue of Manuscripts Containing Anglo-Saxon. Oxford, 1957.

And now we arrive at today, and at the next major change to come to manuscript descriptions, again due to new technology. Libraries around the world, including here at Penn, are writing manuscript descriptions using code instead of on paper, and publishing them online along with digital images of the manuscript pages, so people can not only read about our manuscripts but also see images of them and use our data to create new things. We use the data ourselves; for example, in OPenn (Primary Digital Resources Available to Everyone) we build websites from our manuscript descriptions to make them available to the widest possible audience.

I want to close by giving a shout-out to the Schoenberg Database of Manuscripts, directed by Lynn Ransom, which is pushing the definition of manuscript descriptions in new scholarly directions. In the SDBM, a manuscript is described temporally, through entries that record where a book was at particular moments in time (either in published catalogs, or through personal observation). As scholarly needs continue to change, and technology makes new things possible, the description of manuscripts will likewise continue to change around them, even as it has over the last 800 years.

“Freely available online”: What I really want to know about your new digital manuscript collection

So you’ve just digitized medieval manuscripts from your collection and you’re putting them online. Congratulations! That’s great. Online access to manuscripts is so important, for scholars and students and lots of other people, too (I know a tattoo artist who depends on digital images for design ideas). As the number of collections available online has grown in recent years (DMMAP lists 545 institutions offering at least one digitized manuscript), the use of digital manuscripts by medievalists has grown right along with the supply.[1] If you’re a medievalist and you study manuscripts, I’m confident that you regularly use digital images of manuscripts. So every new manuscript online is a cause for celebration. But now, you who are making digitized medieval manuscripts available online, tell us more. How, exactly, are you making your manuscripts available? And please don’t say you’re making them freely available online.

I hate this phrase. It makes my teeth clench and my heart beat faster. It makes me feel this way because it doesn’t actually tell me anything at all. I know you are publishing your images online, because where else would you publish them (the age of the CD-ROM for these things is long gone), and I know they are going to be free, because otherwise you’d be making a very different kind of announcement and I would be making a very different kind of complaint (I’m looking at you, Codices Vossiani Latini Online). What else can you tell me?

Here are the questions I want answered when I read about an online manuscript collection.

  1. How are your images licensed? This is going to be my first question, and for me it’s the most important because it defines what I can do with your images. Are you placing them in the public domain, licensing them CC0? This is what we do at my institution, and it’s what I like to see, since, you know, medieval manuscripts are not in copyright, at least not in the USA (I understand things are more complicated in Europe). If not CC0, then what restrictions are you placing on them? Creative Commons has a tool that lets you select the restrictions you want and then gives you license options. Consider using it as part of your decision-making process. A clear license is a good license.
  2. How can I find your manuscripts? Is there a search and browse function on your site, or do I have to know what I’m looking for when I come in?
  3. Will your images be served through the International Image Interoperability Framework (IIIF)? IIIF has become very popular recently, and for good reason – it enables users to pull manuscripts from any IIIF-compliant repository into a single interface, for example comparing manuscripts from different institutions in a single browser window. A user will need access to the IIIF manifests to make this work – the manifest is essentially a file containing metadata about the manuscript and a list of links to image files. So, if you are using IIIF, will the manifests be easily accessible so I can use them for my own purposes? (For reference, e-codices links IIIF manifests to each manuscript record, and it couldn’t be easier to find them.)
  4. What kind of interface will you have? I usually assume that a page-turning interface will be provided, but if there is some other interface (like, for example, Yale University, which links individual images from a thumbnail strip on the manuscript record) I’d like to know that. Will users be able to build collections or make annotations on page images, or contribute transcriptions? I’d like to know that, too.
  5. How can I get your images? I know you’re proud of your interface, but I might want to do something else with your images, either download them to my own machine or point to them from an interface I’ve built myself or borrowed from someone else (maybe using IIIF, but maybe not). If you provide IIIF manifests I have a list of URLs I can use to point to or download your image files (more or less, depending on how your server works), but if you’re not using IIIF, is there some other way I can easily get a list of image URLs for a manuscript? For example, OPenn and The Digital Walters publish TEI documents with facsimile lists. If you can’t provide a list, can you at least share how your URLs are constructed? If I know how they’re made I can probably figure out how to build them myself. (A sketch of the IIIF route follows this list.)
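
To show how little stands between a good manifest and a usable list of images, here is a minimal sketch that reads a IIIF Presentation 2.x manifest and prints each page label and image URL; the manifest URL is a placeholder, standing in for the real ones that publishers like e-codices link from each manuscript record.

```python
# Read a IIIF Presentation 2.x manifest and list page labels and image URLs.
import json
import urllib.request

MANIFEST_URL = "https://example.org/iiif/ms-123/manifest.json"  # placeholder

with urllib.request.urlopen(MANIFEST_URL) as response:
    manifest = json.load(response)

# In Presentation 2.x each canvas is a page; its image resource carries the URL.
for canvas in manifest["sequences"][0]["canvases"]:
    image_url = canvas["images"][0]["resource"]["@id"]
    print(canvas.get("label", "?"), image_url)
```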

Those are the big five questions I like to have answered when I read about a new digital manuscript collection, and they very rarely are. Please, please, please: next time you announce a new collection, try to go beyond “freely available online” and tell us all more about how your collection will be made available, and what users will be able and allowed to do with it.

[1] In 2002, 33% of survey respondents reported using manuscript facsimiles “print mostly, electronic sometimes” and 47% reported using “print only”. In 2011, 44% reported using them “electronic mostly, print sometimes” and 17% reported using “electronic only”. This is an enormous shift. From Dot Porter, “Medievalists and the Scholarly Digital Edition,” Scholarly Editing: The Annual of the Association for Documentary Editing 34 (2013). http://www.scholarlyediting.org/2013/essays/essay.porter.html

“What is an edition anyway?” My Keynote for the Digital Scholarly Editions as Interfaces conference, University of Graz

This week I presented this talk as the opening keynote for the Digital Scholarly Editions as Interfaces conference at the University of Graz. The conference is hosted by the Centre for Information Modelling at Graz University; the programme chair is Georg Vogeler, Professor of Digital Humanities; and the programme is endorsed by DiXiT, the Digital Scholarly Editions Initial Training Network. Thanks so much to Georg for inviting me! And thanks to the audience for the discussion after. I can’t wait for the rest of the conference.

What is an edition anyway?

Thank you to Georg Vogeler for inviting me to present the keynote at the symposium, thank you to DiXiT for making this conference possible, and danke to the welcoming speakers for welcoming us so warmly. I’m excited to be here and am looking forward to hearing what the speakers have to say about digital scholarly editions as interfaces. Georg invited me here to talk about my work on medievalists’ use of digital editions. But first, I have a question.

What is an edition? I think we all know what an edition is, but it’s still fun, I find, to investigate the different ways that people define edition, or think about editions. So despite the title of this talk, most of what I’m going to be talking about is the various ways that people think about editions and why that matters to those of us in the room who spend our time building editions, and at the end I’m going to share my thoughts on directions I’d like to see medieval editions in particular take in the future.

I’ll admit that when I need a quick answer to a question, often the first place I turn is Google. Preparing for this talk was no different. So I asked Google to define edition for me, and this is what I got. No big surprise: two definitions, the first “a particular form or version of a published text,” and the second “the total number of copies of a book, newspaper, or other published material issued at one time.” The first definition here is close to the way I would answer this question myself. I think I’d generally say that an edition is a particular version of a text. It might be a version compiled from other versions, as in a scholarly critical edition, but it need not be. I’m a medievalist, so this got me thinking about texts written over time, and what might make a text rise to the level of being an “edition”, or not.

Bankes Papyrus, British Museum Papyrus 114.

So here is some text from the Iliad, written on a papyrus scroll in the 2nd century BC. The scroll is owned by the British Museum, Papyrus 114, also known as the Bankes Papyrus. The Iliad, you probably know, is an ancient Greek epic poem set during the Trojan War, which focuses on a quarrel between King Agamemnon and the warrior Achilles. If you are a Classicist, I apologize in advance for simplifying a complex textual situation. If you aren’t a Classicist, and you’ve read the Iliad, you probably read it in a translation from the Greek into your native language, and the text most likely would have been presented to you as “The Text of The Iliad” – that is, a single text. That text, however, is built from many small material fragments that were written over a thousand years, and which represent the written form of a text that was composed through oral performance. The Bankes Papyrus is actually one of the most complete examples of the Iliad in papyrus form – most surviving examples are much more fragmentary.

Venetus A, aka Marcianus Graecus Z. 454 [=822] (ca. 950), fol. 12r
As far as we know, the text of the Iliad was only compiled into a single version in the 10th century, in what is known as the Venetus A manuscript, now at the Marciana Library in Venice. I have an image of the first page of the first book of the Iliad here. You can see that this presents much more than the text, which is the largest writing in the center-left of the page. The compiled text is surrounded by layers of glossing, which include commentary as well as textual variants.

UPenn Ms Codex 1058, fol. 12r.

The Venetus A is just one example of a medieval glossed manuscript. Another, more common genre of glossed manuscript is the Glossed Psalter: the text of the Psalms written with glosses – quotations from the Church Fathers – included to comment on specific lines. Here is an example of a Glossed Psalter from the University of Pennsylvania’s collection. This is a somewhat early example, dated to around 1100, which is before the Glossa Ordinaria was compiled (the Glossa Ordinaria was the standard commentary on the scriptures into the 14th century). Although this isn’t as complex as the Venetus A, you can still see at least two levels of glossing, both in the text and around the margins.

UPenn Ms. Codex 1640, fol. 114r.

One more example: a Manipulus florum text from another University of Pennsylvania manuscript. Thomas of Ireland’s Manipulus florum (“Handful of flowers”), compiled in the early 14th century, belongs to the genre of medieval texts known as florilegia, collections of authoritative quotations that are the forerunners of modern reference works such as Bartlett’s Familiar Quotations and The Oxford Dictionary of Quotations. This particular florilegium contains approximately 6,000 Latin proverbs and textual excerpts attributed to a variety of classical, patristic, and medieval authors. The flora are organized under alphabetically ordered topics; here we see magister, or teacher. The red text is citation information, and the brown text is the quotations.

Now let’s take a look at a modern edition: Richard Marsden’s 2008 edition of The Old English Heptateuch, published with the Early English Text Society. A glance at the table of contents reveals an introduction with various sections describing the history of editions of the text, the methodology behind this edition, and a description of the manuscripts and the relationships among them. This is followed by the edited texts themselves, which are presented in the traditional manner: with “the text” at the top of the page, and variant readings and other notes – the apparatus – at the bottom. In this way you can both read the text the editor has decided is “the text”, and check to see how individual manuscripts differ in their readings. It is, I’ll point out, very similar to the presentation of the Iliad text in the Venetus A.

Electronic and digital editions have traditionally (as far as we can talk about there being a tradition of these types of editions) presented the same type of information as print editions, although the expansiveness of hypertext has allowed us to present this information interactively, selecting only what we want to see at any given moment and enabling us to follow trails of information via links and pop-ups. For example, I have here Prue Shaw’s edition of Dante’s Commedia, published by Scholarly Digital Editions. Here we have a basic table of contents, which informs us of the sections included in the edition.

Here we have the edited text from one manuscript, with the page image displayed alongside (this of course being one of the main differences between digital and print editions), with variant readings and other notes available at the click of the mouse.

A more extensive content list is also available via dropdown, and with another click I can be anywhere in the edition I wish to be.

Here I am at the same point in the text, except the base text is now this early printed edition, and again the page image is here displayed so I can double-check the editor’s choices should I wish to.

With the possible exception of the Bankes Papyrus, all of these examples are editions. They reflect the purpose of the editor, someone who is not writing original text but is compiling existing text to suit some present desire or need. The only difference is the material through which the edition is presented: handwritten parchment or papyrus, usually considered “primary material”, versus a printed book or digital media, or “secondary material”. And I could even make an argument that the papyrus is an edition as well, if I posit that the individual who wrote the text on the papyrus was compiling it from some other written source or even from the oral tradition.


I want to take a step back now from the question of what is an edition and talk a bit about why, although the answer to this may not matter to me personally, it does matter very much when you start asking people their opinions about editions. (I am not generally a fan of labels and prefer to let things be whatever they are without worrying too much about what I should call them. I’m no fun at parties.)

I’ve been studying the attitudes of medievalists towards digital resources, including editions, since I was a library science graduate student back in 2002. In May 2001 I graduated with an MA from the Medieval Institute at Western Michigan University, with a focus on Anglo-Saxon language, literature, and religious culture. I had taken a traditional course of study, including courses in paleography and codicology; Old English, Middle English, and Latin language and literature; and several courses on the reading of religious texts, primarily hagiographical texts. I was keenly aware of the importance of primary source materials to the study of the Middle Ages, and I was also aware that there were CD-ROMs available that put primary materials, and scholarly editions of them, at one’s fingertips. There were even, at this time, the first online collections of medieval manuscripts (notably the Early Medieval Manuscript Collection at the Bodleian Library at Oxford). But I was curious about how much these new electronic editions (and electronic journals and databases, too) were actually being used by scholars. I conducted a survey of medievalists, asking them about their attitudes toward, and use of, electronic resources. I wrote up my findings in a research paper, “Medievalists’ Use of Electronic Resources: The Results of a National Survey of Faculty Members in Medieval Studies,” which is still available, if you want to read it, in the IU Bloomington institutional repository.

I conducted a second survey in 2011, and compared findings from the two surveys in an article published in 2013 in Scholarly Editing, “Medievalists and the Scholarly Digital Edition.” The methodologies for these surveys were quite different (the first was mailed to a preselected group of respondents, while the second was sent to a group but also advertised on listservs and social media), and I’m hesitant to call either of them scientific, but with these caveats they do show a general trend in usage over the nine years between them, and this trend reflects what I have seen anecdotally.

In this chart from 2002, we see that 7% of respondents reported using electronic and print editions the same, 44% print mostly, and 48% print only.

Nine years later, while still no one reports using only electronic editions, 7% report using electronic mostly, 12% electronic and print the same, 58% print mostly, and 22% print only. The largest shift is from “print only” to “print mostly”, and it’s most clearly seen on this chart.

Now this is all well and good, and you’d be forgiven for looking at this chart and coming to the conclusion that all these folks had finally “seen the light” and were hopping online and onto CD-ROM to check out the latest high-tech digital editions in their field. But the written comments show that this is clearly not the case, at least not for all respondents, and that any issues with the survey data come from a disconnect between how I conceive of a “digital edition” and how the respondents conceive of the same.

Exhibit A: Comments from four different respondents explaining when they use digital editions and how they find them useful. I won’t read these to you, but I will point out that the phrase Google Books has been bolded in three of them, and while the other one doesn’t mention Google Books by name, the description strongly implies it.

I have thought about this specific disconnect a lot in the past five years, because I think it reflects a general disconnect between how those of us who create digital editions think about editing and editions, and how more traditional scholars, those who consume editions, think about them. Out of curiosity, as I was working on this lecture I asked my Facebook “friends” to give me their own favorite definition of edition (not digital edition, just edition), and here are two that reflect the general consensus. The first is very material, a bibliographic description that would be favored by early modernists (as a medievalist I was actually a bit shocked by this definition; although I know what an edition is, bibliographically speaking, I wasn’t thinking in that direction at that point – I was really thinking of a “textual edition”). The second focuses not so much on how the text was edited but on the apparatus that comes along with it. Thus an edited text by itself isn’t properly an edition; it requires material explaining the text to be a “real” edition. Interestingly, this second definition arguably includes the Venetus A manuscript we looked at earlier.

This spring, in preparation for this lecture, I created a new survey, based on the earlier surveys (which were more or less identical) but taking as a starting place Patrick Sahle’s definition of a Digital Scholarly Edition:

Digital scholarly editions are not just scholarly editions in digital media. I distinguish between digital and digitized. A digitized print edition is not a “digital edition” in the strict sense used here. A digital edition can not be printed without a loss of information and/or functionality. The digital edition is guided by a different paradigm. If the paradigm of an edition is limited to the two-dimensional space of the “page” and to typographic means of information representation, then it’s not a digital edition.

In this definition Sahle differentiates between a digital edition, which essentially isn’t limited by typography and thus can’t be printed, and a digitized edition, which is and can. In practice most digitized editions will be photographic copies of print editions, although of course they could just be very simple text rendered fully in HTML pages with no links or pop-ups. While the results of these lines of questioning aren’t directly comparable with the 2002 and 2011 results, I think it’s possible to see a general continuing trend towards the use of digitized editions, if not towards digital editions in Sahle’s sense.

First, a word about methodology. This year’s respondents were entirely self-selecting, and the announcement of the survey, which was online, went out through social media and listservs. I didn’t have a separately selected group. There were 337 total respondents, although not every respondent answered every question.

This year, I asked respondents about their use of editions – digital, digitized, and print – over the past year, focusing on the general number of times they had used each. Over 90% of respondents report using digital editions at least once, although only just over 40% report using them “more times than I can count”.

When asked about digitized editions, however, over 75% report using them “more times than I can count”, and only two respondents – 0.6% – report not using them at all.

Print edition usage is similar to digitized edition usage, with about 78% reporting they use them “more times than I can count” and no respondents reporting that they never use them. A chart comparing the three types of editions side by side shows clearly how similar the numbers are for digitized and print editions versus digital editions.

Comparing usage of Digital Editions, Digitized Editions, and Print Editions.

What can we make of this? Questions that come immediately to my mind include: are we building the editions that scholars need? That they will find useful? Are there editions that people want that aren’t getting made? But also: does it matter? If we are creating our editions as a scholarly exercise, for our own purposes, does it matter if other people use them or not? It might hurt to think that someone is downloading a 19th-century edition from Google Books instead of using my new one, but is it okay? And if it’s not, what can we do about that? (I’m not going to try to answer that, but maybe we can think about it this week.)


I want to change gears and come back now to this question: what is an edition? I’ve talked a bit about how I conceive of editions, and how others do, and how, if I’m going to have a productive conversation about editions with someone (or ask people questions on a survey), it’s important to make sure we’re on the same page – or at least in the same book – regarding what we mean when we say “edition”. But now I want to take a step back – way back – and think about what an edition is at the most basic level. On the Platonic level. If an edition is a shadow on the wall, what is casting that shadow? Some people will say “the urtext”, which I think of (not unkindly, I assure you) as the floating text, the text in the sky: the text that never existed until some editor got her hands on it and brought it to life, as Victor Frankenstein brought to life that poor, wretched monster in the pages of Mary Shelley’s classic horror story. I say: we know texts because someone cared enough to write them down, and some of that survives, so what we have now is a written record that is intimately connected to material objects. Text doesn’t float; text is ink on skin and ink on paper and notches in stone, paint on stone, and whatever else, borne on whatever material was handy. So perhaps we can posit editions that are cast from manuscripts and the other physical objects on which text is borne – not simply displayed alongside text, or pointed to from text, or described in a section “about the manuscript”, but flipping the model and organizing the edition according to the physical object.

I didn’t come up with this idea, I am sad to say. In 2015, Christoph Flüeler presented a talk at the International Congress on Medieval Studies titled “Digital Manuscripts as Critical Edition,” later posted to the Schoenberg Institute for Manuscript Studies blog. In this essay Flüeler asks how a digital manuscript stands in relation to a critical edition of a text: “Can the publication of a digital manuscript on the internet be understood as an edition? Further: could such an edition even be regarded as a critical edition?” His answer is, of course, yes. I won’t go into his arguments here; instead I’m going to use them as a jumping-off point, but I encourage you to read his essay.

This concept is very appealing to me. I suppose I should admit now, almost at the end of my keynote, that I am not presently doing any textual editing, and I haven’t in a few years. My current position is “Curator, Digital Research Services” in the Schoenberg Institute for Manuscript Studies at the University of Pennsylvania Libraries in Philadelphia. This position is a great deal of fun and encompasses many different responsibilities. I am involved in the digitization efforts of the unit, and I’m currently co-PI of Bibliotheca Philadelphiensis, a grant-funded project that will digitize all the medieval manuscripts in Philadelphia; I can only mention it now, but I’ll be glad to talk about it later with anyone interested in hearing more. All our digital images are released into the public domain and published openly on our website, OPenn, along with human-readable HTML descriptions, links to download the images, and robust TEI manuscript descriptions available for download and reuse.

I also do a fair amount of what I think of as experimental work, including finding new ways to make manuscripts available to scholars and the public. I’ve created electronic facsimiles in the epub format, a project currently being expanded by the Penn Libraries metadata group, which are published in our institutional repository, and I make short video orientations to our manuscripts, which are posted on YouTube and also made available through the repository. In the spring I presented on OPenn to a mixed group of librarians and faculty at Vanderbilt University in Tennessee, after which an art historian said to me, “this open data thing is great and all, but why can’t we just have the manuscripts as PDFs?” So I held my nose and generated PDF files for all our manuscripts, then did the same for the Walters Art Museum manuscripts for good measure. I posted them all to Google Drive, along with spreadsheets as a very basic search facility.

Collation visualization via VisColl

I’ve also been working for the past few years on developing a system for modeling and visualizing the physical collation of medieval manuscripts (this is distinct from textual collation, which involves comparing versions of texts). With a bit of funding from the Mellon Foundation, and in collaboration with Alexandra Gillespie and her team at the University of Toronto, I am very excited about the next version of that system, which we call VisColl (it is on GitHub if you’d like to check it out – you can see the code, and there are instructions for creating your own models and visualizations). The next version will include facilities for connecting tags, and perhaps transcriptions, to the deconstructed manuscript. I hadn’t thought of the thing that this system generates as an edition, but perhaps it is. Instead of being an edition of a text, you might think of it as an edition of a manuscript that happens to have text on it (or sometimes, perhaps, won’t).
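
To give a flavor of what “modeling physical collation” means, here is a toy sketch in Python – emphatically not VisColl’s actual format, just an illustration of the kind of structure such a model captures: a quire as an ordered list of leaves, with conjoin pairs recording which leaves form bifolia.

    # Toy collation model for one quire of eight leaves (not VisColl's
    # real schema): each conjoin pair is a bifolium, so leaf 1 is
    # conjoint with leaf 8, leaf 2 with leaf 7, and so on.
    quire = {
        "n": 1,
        "leaves": [1, 2, 3, 4, 5, 6, 7, 8],
        "conjoins": [(1, 8), (2, 7), (3, 6), (4, 5)],
    }

    # A visualization or edition could then hang page images, tags,
    # or transcriptions off each leaf position.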

I am aware that I’m reaching the end of my time, so I just want to take a few minutes to mention something that I see playing an enormous role in the future of digital-manuscripts-as-editions, and that’s the International Image Interoperability Framework, or IIIF. I think Jeffrey Witt may mention IIIF in his presentation tomorrow, and perhaps others will as well, although I don’t see any IIIF-specific papers in the schedule. At the risk of oversimplifying, IIIF is a set of Application Programming Interfaces (APIs) – sets of routines, protocols, and tools – that enable the interoperability of image repositories. This means you can use images from different repositories in the same browser or other tool. Here, quickly, is an example of how that can work.

Screenshot: a IIIF manifest from e-codices.

e-codices publishes links to IIIF manifests for each of their manuscripts. A manifest is a JSON file that contains descriptive and structural metadata for a manuscript, including links to images that are served through a IIIF server. You can look at it; it is human readable, kind of, but it’s a mess.
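
If you’d rather not read the JSON by eye, a few lines of code can pull out the useful parts. Here is a minimal sketch in Python against the IIIF Presentation API 2.x structure in use at the time; the manifest URL below is a placeholder, so substitute a real one from e-codices.

    # Minimal sketch: print the canvas labels and image URLs from a
    # IIIF Presentation 2.x manifest. The URL below is a placeholder.
    import json
    import urllib.request

    manifest_url = "https://example.org/iiif/manifest.json"
    with urllib.request.urlopen(manifest_url) as response:
        manifest = json.load(response)

    print(manifest["label"])
    for canvas in manifest["sequences"][0]["canvases"]:
        image = canvas["images"][0]["resource"]
        print(canvas["label"], image["@id"])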

Two e-codices manuscripts (and others) in Mirador.

However, if you copy that link and paste it into a IIIF-conformant tool such as Mirador (a simple IIIF browser which I have installed on my laptop) you can create your own collection and then view and manipulate the images side-by-side. Here I’ve pulled in two manuscripts from e-codices, both copies of the Roman de la Rose.

Screenshot: two copies of the Roman de la Rose, loaded side by side in Mirador.

And here I can view them side by side: I can compare the images, compare the text, and make annotations on them too. Here is a tool for creating editions of manuscripts.

(A quick side note: of course there are other tools that offer image tagging, including the DM project at SIMS, but what IIIF offers is not a single tool but a system for building and viewing editions, and all sorts of other unnamable things, using manuscripts held in different institutions without having to move the images around. I cannot stress enough how radical this is for medieval manuscript studies.)

However, as fond as I am of IIIF, and as promising as I think it is for my future vision, my support for it comes with some caveats. If you don’t know, I am a huge proponent of open data, particularly open manuscript data. The Director of the Schoenberg Institute is Will Noel, an open data pioneer in his own right who has been named a White House Champion of Change, and I take him as my example. I believe that in most cases, when institutions digitize their manuscript collections, they are obligated to release those images into the public domain, or at the very least under a Creative Commons Attribution license (to be clear, a license that allows commercial use), and that manuscript metadata should be licensed for reuse. My issue with IIIF is that it presents the illusion of openness without actual openness. That is, even if images are published under a closed license, anyone with the IIIF manifest can do whatever they want with them – as long as they’re doing it through IIIF-compliant software. You can’t download them and use them outside of the system (to, say, generate PDF or epub facsimiles, or collation visualizations). I love IIIF for what it makes possible, but I also think it’s vital to keep data open so people can use it outside of any given system.

DATA OVER INTERFACE

We have a saying around the Schoenberg Institute: Data Over Interface. It was introduced to us by Doug Emery, our data programmer, who was also responsible for the curation of the data of the Archimedes Palimpsest Project and the Walters Art Museum manuscripts. We like it so much we had it put on t-shirts (you can order your own here!). I like it, not because I necessarily agree that the data is always more important than the interface, but because it makes me think about whether or not the data is always more important than the interface. Excellent, robust data with no interface isn’t easily usable (although a creative person will always find a way), but an excellent interface with terrible data, or no data at all, is useless as anything other than a show piece. And then inevitably my mind turns to manuscripts, and I begin to wonder: in the case of a manuscript, what is the data and what is the interface? Is a manuscript simply an interface for the text and whatever else it bears, or is the physical object data of its own, begging for an interface to present it, to pull it apart and put it back together in some way that helps us make sense of it or of the time in which it was created? Is it both? Is it neither?

I am so excited to be here and to hear about what everyone in this room is thinking about editions, and interfaces, and what editions are, and what interfaces are and are for. Thank you so much for your time, and enjoy the conference.

Presented September 23, 2016.

UPenn’s Schoenberg Manuscripts, now in PDF

Hi everyone! It’s been almost a year since my last blog post (in which I promised to post more frequently, haha) so I guess it’s time for another one. I actually have something pretty interesting to report!

Last week I gave an invited talk at the Cultural Heritage at Scale symposium at Vanderbilt University. It was amazing. I spoke on OPenn: Primary Digital Resources Available to Everyone, the platform we use in the Schoenberg Institute for Manuscript Studies at the University of Pennsylvania Libraries to publish high-resolution digital images and accompanying metadata for all our medieval manuscripts. (I also talked for a few minutes about the Schoenberg Database of Manuscripts, which is a provenance database of pre-1600 manuscripts.) The philosophy of OPenn is centered on openness: all our manuscript images are in the public domain, our metadata is licensed with Creative Commons licenses, and none of those licenses prohibit commercial use. Next to openness, we embrace simplicity. There is no search facility or fancy interface to the data. The images and metadata files sit on a file system (similar to the file system on your own computer), and browse pages for each manuscript are presented in HTML that is processed directly from the metadata file. (Metadata files are in TEI/XML, using the manuscript description element.)


This approach is actually pretty novel. Librarians and faculty scholars alike love their interfaces! And, indeed, after my talk someone came up to me and said, “I’m a humanities faculty member, and I don’t want to have to download files. I just want to see the manuscripts. So why don’t you make them available as PDF so I can use them like that?”

This gave me the opportunity to talk about what OPenn is, and what it isn’t (something I didn’t have time to do in my talk). The humanities scholar who just wants to look at manuscripts is really not the audience for OPenn. If you want to search for and page through manuscripts, you can do that on Penn in Hand, our longstanding page-turning interface. OPenn is about data, and it’s about access. It isn’t for people who want to look at manuscripts, it’s for people who want to build things with manuscript data. So it wouldn’t make sense for us to have PDFs on OPenn – that’s just not what it’s for.

Landing page for Penn in Hand.

HOWEVER. However. I’m sympathetic. Many, many people want to look at manuscripts, and PDFs are convenient, and I want to encourage them to see our manuscripts as available to them! So, even if Penn isn’t going to make PDFs available institutionally (at least, not yet – we may in the future), maybe this is something I could do myself. And since all our manuscript data is available on OPenn and licensed for reuse, there is no reason for me not to do it.

So here they are.

If you click that link, you’ll find yourself in a Google Drive folder titled “OPenn manuscript PDFs”. In there is currently one folder, “LJS Manuscripts.” In that folder you’ll find a link to a Google spreadsheet and over 400 PDF files. The spreadsheet lists all the LJS manuscripts (LJS = Laurence J. Schoenberg, who gifted his manuscripts to Penn in 2012), including catalog descriptions, origin dates, origin locations, and shelfmarks. Let’s say you’re interested in manuscripts from France. You can highlight the Origin column and do a “Find” for “France.” It’s not a fancy search, so you’ll have to write down the shelfmarks of the manuscripts as you find them, but it works. Once you know the shelfmarks, go back into the “LJS Manuscripts” folder and find and download the PDF files you want. Note that some manuscripts may have two PDF files, one with “_extra” in the file name. These are images that are included on OPenn but are not part of the front-to-back digitization of a manuscript. They might include things like extra shots of the binding, or reference shots.

If you are interested in knowing how I did this, please read on. If not, enjoy the PDFs!


How I did it

I’ll be honest, this is my favorite part of the exercise, so thank you for sticking with me for it! There won’t be a pop quiz at the end, although if you want to try this out yourself you are most welcome to.

First I downloaded all the web jpeg files from the LJS collection on OPenn. I used wget to do this, because with wget I am able to get only the web jpeg files from all the collection folders at once. My wget command looked like this:

wget -r -np -A "_web.jpg" http://openn.library.upenn.edu/Data/0001/

Brief translation:

wget = use the wget program
-r = “recursive”, basically means go into all the child folders, not just the folder I’m pointing to
-np = “no parent”, basically means don’t go into the parent folders, no matter what
-A “_web.jpg” = “accept list”, in this case I specified that I only want files whose names end in _web.jpg (which all the web jpeg files on OPenn do)
http://openn.library.upenn.edu/Data/0001/ = where all the LJS manuscript data lives

I didn’t use the -nd option, which I usually do (-nd = “no directories”; if you don’t use this option you get the entire directory structure of the file server, starting from the root, which in this case is openn.library.upenn.edu). What this means, practically, is that if you use wget to download one file from a directory five levels down, you get empty folders four levels deep and then the bottom directory with one file in it. Not fun. But in this case the full structure is helpful, and you’ll see why later.
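
Concretely, the download recreates the server’s directory structure, something like this (the layout matches the paths used later in this post; the second shelfmark is just for illustration):

    openn.library.upenn.edu/
      Data/
        0001/
          ljs101/
            data/
              web/
                (the web jpeg files for LJS 101)
          ljs102/
            ...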

At my house, with a pretty good wireless connection, it took about 5 hours to download everything.

I used Automator to batch create the PDF files. After a bit of googling I found this post on batch creating multipage PDF files from jpeg files. There are some different suggestions, but I opted to use Mac’s Automator. There is a workflow linked from that post. I downloaded that and (because all of the folders of jpeg images I was going to process are in different parent folders) I replaced the first step in the workflow, which was Get Selected Finder Items, with Get Specified Finder Items. This allowed me to search in Automator for exactly what I wanted. So I added all the folders called “web” that were located in the ancestor folder “openn.library.upenn.edu” (which was created when I downloaded all the images from OPenn in the previous step). In this step Automator creates one PDF file named “output.pdf” for each manuscript in the same location as that manuscript’s web jpeg images (in a folder called web – which is important to know later).
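
(Not on a Mac? The same batch conversion can be done with a short Python script using the Pillow imaging library – a rough sketch, assuming the folder layout created by the wget step above.)

    # Rough sketch: build one output.pdf per "web" folder, mirroring
    # the Automator workflow. Requires the Pillow library.
    import pathlib
    from PIL import Image

    root = pathlib.Path("openn.library.upenn.edu/Data/0001")
    # find every folder named "web", including the ones under "extra"
    for web_dir in (p for p in root.rglob("web") if p.is_dir()):
        jpegs = sorted(web_dir.glob("*_web.jpg"))
        pages = [Image.open(j).convert("RGB") for j in jpegs]
        if pages:
            pages[0].save(web_dir / "output.pdf",
                          save_all=True, append_images=pages[1:])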

Once I created the PDFs, I no longer needed the web jpeg files, so I took some time to delete them all. I did this by searching in Finder for “_web.jpg” within openn.library.upenn.edu and then sending them all to Trash. This took ages, but when it was done the only things left in those folders were output.pdf files.
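
(This cleanup is quicker outside of Finder; a minimal sketch in Python:)

    # Minimal sketch: delete the downloaded web jpegs once the PDFs exist.
    import pathlib

    root = pathlib.Path("openn.library.upenn.edu")
    for jpg in root.rglob("*_web.jpg"):
        jpg.unlink()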

I still had more work to do. I needed to change the names of the PDF files so I would know which manuscripts they represented. Again, after a bit of Googling, I chanced upon this post, which includes an AppleScript that did exactly what I needed: it renames files according to the path of their location on the file system. For example, the file “output.pdf” located in Macintosh HD/Users/dorp/Downloads/openn/openn.library.upenn.edu/Data/0001/ljs101/data/web would be renamed “Macintosh HD_Users_dorp_Downloads_openn_openn.library.upenn.edu_Data_0001_ljs101_data_web_001.pdf”. I’d never used AppleScript before so I had to figure that out, but once I did it was smooth sailing – it just took a while. (To run the script I copied it into Apple’s Script Editor, hit the play button, and selected openn.library.upenn.edu/Data/0001 when it asked me where to point the script.)

Finally, I had to remove all the extraneous pieces of the file names to leave just the shelfmark (or shelfmark + “extra” for those files that represent the extra images). Automator to the rescue again!

  1. Get Specified Finder Items (adding all PDF files located in the ancestor folder “openn.library.upenn.edu”)
  2. Rename Finder Items to replace text (replacing “Macintosh HD_Users_dorp_Downloads_openn_openn.library.upenn.edu_Data_0001_” with nothing)
  3. Rename Finder Items to replace text (replacing “_data_web_001” with nothing)
  4. Rename Finder Items to replace text (replacing “_data_extra_web_001” with “_extra” – this identifies PDFs that are for “extra” images)
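
(For what it’s worth, the AppleScript plus both Automator passes could be replaced by a few lines of Python that rename each PDF after its shelfmark directly; a rough sketch, again assuming the wget folder layout:)

    # Rough sketch: rename each output.pdf to its shelfmark, adding
    # "_extra" for the PDFs built from the extra images.
    import pathlib

    root = pathlib.Path("openn.library.upenn.edu/Data/0001")
    for pdf in root.rglob("output.pdf"):
        shelfmark = pdf.relative_to(root).parts[0]  # e.g. "ljs101"
        suffix = "_extra" if "extra" in pdf.parts else ""
        pdf.rename(pdf.with_name(shelfmark + suffix + ".pdf"))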

The last thing I had to do was move them into Google Drive. Again, I just searched for “.pdf” in Finder (taking just those in openn.library.upenn.edu/Data/0001) and dragged them into Google Drive.

All done!

I generated the spreadsheet by running an XSLT script over the TEI manuscript descriptions (it’s based on a spreadsheet I created a couple of years ago, when I first uploaded data about the Penn manuscripts to Viewshare). Leave a comment or send me a note if that sounds interesting and I’ll make a post on it.
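
(If you’d like to build a similar spreadsheet yourself without XSLT, here is a rough sketch of the same idea in Python. The file-naming pattern and element choices are my guesses at the relevant parts of a TEI manuscript description, so treat them as assumptions and adjust to what you find in the files.)

    # Rough sketch: harvest shelfmark, title, and origin from each TEI
    # manuscript description into a CSV. Element choices are guesses.
    import csv
    import pathlib
    import xml.etree.ElementTree as ET

    TEI = "{http://www.tei-c.org/ns/1.0}"
    root = pathlib.Path("openn.library.upenn.edu/Data/0001")

    with open("ljs_manuscripts.csv", "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["shelfmark", "title", "origin"])
        for tei_file in sorted(root.rglob("*_TEI.xml")):
            ms = ET.parse(tei_file).find(f".//{TEI}msDesc")
            if ms is None:
                continue
            fields = (
                ms.find(f".//{TEI}idno[@type='call-number']"),
                ms.find(f".//{TEI}msItem/{TEI}title"),
                ms.find(f".//{TEI}origPlace"),
            )
            writer.writerow([f.text if f is not None else "" for f in fields])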