Using VisColl to Visualize Parker on the Web: Reports on an experiment

This is the full text of a talk I presented at the Parker on the Web 2.0 Symposium in Cambridge on March 16, 2018. (Please note the addendum at the end, which addresses an issue that came up in discussion later in the day.)

I want to begin my presentation by talking about interface.

DATA OVER INTERFACE

A couple of years ago I presented a keynote at a digital humanities conference on digital editing in which I made the argument that data for a project should take precedence over the interfaces used to present that data. (I stole this idea from my colleague Doug Emery, and I liked it so much that I had it put on a T-shirt.) In my talk today I want to investigate how data and interface work together, how existing interfaces can influence both the data we gather and the development of new interfaces, and some ways that we can think around existing interfaces to develop new ones (and what this in turn means for our data).

 

This is MS 433, a Miscellany copied in a number of hands from the 13th into the late 15th century. If you want to see this manuscript, you have a few different options, which you can access through the menu in the top right.

The options are Image View, Book View, Scroll View, and Gallery View. You probably know exactly what you’ll get when you make a selection here. Image View presents you with a single image. Book View shows the book openings, also known as facing pages (as Dr Anne McLaughlin said in her introduction at the Symposium, Book View presents the images “as a book, so when I turn the pages, it looks like a book”). Scroll View shows all the page images in a continuous row that you can scroll through back and forth. Gallery View shows all the page images as thumbnails on a single page.

Each of these views serves a different purpose. Image View, Book View, and Scroll View present the images at a size large enough to read, with slightly different methods for moving through the book, while Gallery View is more of a finding tool that also gives you a “sense” of the aesthetic contents of a book: the relative size of script and written area, the distribution of illuminations or miniatures, that kind of thing (as Anne said in her introduction, in this view you can “look at the whole thing – look for initials, for something pretty to look at”). You wouldn’t read a text in Gallery View; you would select an image from that view and then interact with the larger image (clicking a thumbnail in the Gallery View on Parker takes you to the Image View).

I want to consider for a moment why we present digital manuscript images in these ways. Let’s start by looking at some examples of non-digital manuscript facsimiles.

The Exeter book of Old English poetry. London: Printed and Pub. for the Dean and Chapter of Exeter Cathedral by P. Lund, Humphries & Co., Ltd., 1933. Limited to twelve copies, unnumbered and not for sale, and two hundred and fifty copies numbered and for sale, of which this is no. 182. PR 1490 .A1 1933 Special Coll Oversize (University of Arizona)

For example, here’s an opening from the 1933 Early English Manuscripts in Facsimile edition of the Exeter Book. The pages face each other in the manuscript (this is 65b and 66a), but here they’ve been decontextualized, presented in frames and with labels underneath.

Bestiario di Peterborough. Rome: Salerno Editrice, 2004

Compare this with the Salerno Editrice edition of the Peterborough Bestiary, published in 2004, which looks very much like what I imagine the manuscript looks like (I haven’t seen it so I can’t say for sure, but it definitely looks like a manuscript, unlike the Exeter Book facsimile, which looks like pictures of pages reproduced in a modern book).

Microfilm reader and microfilm,
https://blogs.acu.edu/csart/2017/02/08/from-microfilm-to-mass-media-biblical-manuscripts-in-the-digital-age/

And here’s something that is probably familiar to many of us: microfilm, which presents images on a long ribbon of film that you scroll through a special machine to find whichever page you want.

 

Microfiche Reader, linked example from https://www.abc-clio.com/ODLIS/odlis_m.aspx#microfiche

Finally, there’s microfiche, which consists of rectangles of film on which small images of pages are arranged in a grid.

I expect you can see where I’m going with this, because I’m not exactly being subtle. The options for viewing manuscripts in Parker on the Web are basically the same as they have always been. The difference is that instead of having to go to a library to check out a book or access a reader (or order a book through interlibrary loan, if your library doesn’t own it), you can access them in your office, or at your house, at all times of day (as long as your Internet is working and the system isn’t down).

It’s not just the Mirador Viewer (the interface that provides image access in Parker on the Web) that has these options; every online environment for viewing medieval manuscripts has some similar setup, with at least a page-turning interface and frequently a selection of the other three. E-codices is the only interface I know of that has another option: to view the front and back of a leaf at the same time, which is pretty cool. (The Scroll View also shows the front and back of leaves side by side, but in e-codices you can purposefully select this view. If you know of any other interface with unique views, I would be very happy to hear about them.)

A System: Data + Processes

Why is it the case that all manuscript libraries have basically the same interfaces? One reason is probably that, as we can see from the non-digital examples above, that’s the way we’ve always done it. We are used to seeing manuscripts as single pages, facing pages, scrolling pages, and galleries of pages, so that’s how we present them digitally. But it becomes a self-fulfilling prophecy: this is how we view manuscripts, so we create systems that allow us to look at manuscripts in this way. If we want to look at manuscripts in a different way, we need to build new systems. Keep in mind that in a computer system you have two things that need to work together: you need data (information presented in a format that the computer can work with), and you need processes (software or scripts that take that data and do something with it). (This is a really simplified view, of course, but I think it works pretty well.)

Parker on the Web: A IIIF System

Parker on the Web, for example, is a IIIF system, so in order to function it needs IIIF manifests, which provide metadata in a specific format along with links to images served in a specific way; it needs the IIIF server to serve the images; and it needs the IIIF APIs (or more properly, software built to work with the APIs). If any piece of this system doesn’t meet specification – if the manifest is formatted incorrectly, or the image links don’t point to a IIIF image server, or the software doesn’t reference the APIs correctly – the system won’t work. Without both data and processes – data and processes designed to work together – you won’t have a working system.
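To make the data-plus-processes split concrete, here is a minimal sketch (my own illustration, not code from Parker on the Web): the IIIF manifest is the data, and a short script is one possible process acting on it, fetching the manifest and listing the image URL recorded for each canvas. It assumes a Presentation API 2.x manifest, and the manifest URL is a placeholder.

# A minimal sketch: the manifest is the data, this script is one possible process.
# Assumes a IIIF Presentation API 2.x manifest; the URL below is a placeholder.
import json
from urllib.request import urlopen

MANIFEST_URL = "https://example.org/iiif/manuscript/manifest.json"  # placeholder

with urlopen(MANIFEST_URL) as response:
    manifest = json.load(response)

print(manifest.get("label", "(no label)"))
for canvas in manifest["sequences"][0]["canvases"]:
    image = canvas["images"][0]["resource"]          # the image painted onto this canvas
    print(canvas.get("label", "?"), image["@id"])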

I’m interested in creating new ways to present digitized manuscripts, and the frame I’m using is that of the manuscript’s collation. Rather than displaying a digitized manuscript only as a series of images of pages arranged from beginning to end, I want to create displays that take into account pages as leaves connected to each other through the pattern of the quiring: the collation.

M. R. James, A Descriptive Catalogue of the Manuscripts in the Fitzwilliam Museum (Cambridge University Press, 1895)

Collation isn’t a new way to think about manuscripts. In his 1895 catalogue of the manuscripts in the Fitzwilliam Museum, M. R. James described how to collate a manuscript, and he also included collation formulas in most of the manuscript descriptions (a few random examples are in the figure below).

Examples of collation formulas from M. R. James’ Fitzwilliam catalogue

The Parker on the Web also includes collation formulas.

Building a system that takes a manuscript’s quiring into account in the display should be possible. We have the information (in the form of collation formulas), so we should be able to build processes to act on it. But of course it’s not that simple, because although a collation formula contains the information a person might need to construct a diagram of the codex, it isn’t formatted in a way a computer can process. It’s not an effective piece of data for a system of the type I described above. The collation formula isn’t data; it’s a visualization of data, just one way among many to express the physical collation of a manuscript, and which visualization you choose will depend on what you want to do with it. For example, formulas work well in library manuscript descriptions or catalog records because they are compact and textual, while diagrams might be better suited to a scholarly essay or book because they can be annotated. There are other views one could take of the same information; I’m quite fond of this synoptic chart that shows how different texts and image cycles are dispersed through this miscellany.

However, it takes work (both time and effort) to write formulas, draw diagrams, and build charts. This is labor that doesn’t have to be repeated! What we need isn’t a formula but a specially formatted, data-oriented description that can be turned into many different versions for different purposes.

VisColl as a system

This is where VisColl comes in. Briefly, VisColl is a system consisting of a data model – basically a set of rules – that you can use as a guide to build collation models of manuscripts, plus scripts that process a collation model to generate different views of it. We currently have three working scripts: one that generates diagrams, one that generates a presentation of leaves as conjoins (which we call the bifolia view – this view requires digitized page images), and one that generates collation formulas.
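To give a sense of how one model can feed several views, here is a toy sketch. It is emphatically not VisColl’s data model or scripts – just a simplified stand-in in which a list of quires with leaf counts serves as the model, and two tiny “processes” render it as a formula and as conjoint-leaf pairs.

# A toy illustration of one collation model feeding multiple views.
# This is NOT VisColl's data model; it is a simplified stand-in to show the idea.

quires = [("1", 8), ("2", 8), ("3", 10), ("4", 8)]   # (quire number, number of leaves)

def formula_view(quires):
    # Render the model as a compact formula, e.g. "1-2(8), 3(10), 4(8)".
    parts, i = [], 0
    while i < len(quires):
        j = i
        while j + 1 < len(quires) and quires[j + 1][1] == quires[i][1]:
            j += 1
        label = quires[i][0] if i == j else f"{quires[i][0]}-{quires[j][0]}"
        parts.append(f"{label}({quires[i][1]})")
        i = j + 1
    return ", ".join(parts)

def bifolia_view(quires):
    # Render the model as conjoint leaf pairs, quire by quire.
    folio = 1
    for number, leaves in quires:
        pairs = [(folio + k, folio + leaves - 1 - k) for k in range(leaves // 2)]
        print(f"Quire {number}:", ", ".join(f"{a}+{b}" for a, b in pairs))
        folio += leaves

print(formula_view(quires))   # 1-2(8), 3(10), 4(8)
bifolia_view(quires)          # Quire 1: 1+8, 2+7, 3+6, 4+5 ...

The point is simply that one structured description drives both outputs; adding a third view means writing another small process, not re-describing the manuscript.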

We are currently at an in-between stage with VisColl. We had a first version of our data model, and we have developed a second version, but the second one doesn’t have good visualizations yet. So today I’m going to talk about our first model, though I’m happy to answer questions about the second data model later.

The prototype of VisColl took collation formulas from the Digital Walters collection at The Walters Art Museum in Baltimore and generated diagrams directly from them. It was this prototype work that convinced me that generating data out of existing formulas was a terrible idea and would never work at scale. Nevertheless, I decided that the first step in my experiment would be to attempt to parse the collation formulas in Parker on the Web and convert them into XML files following the rules for our collation models. I wrote scripts to pull the formulas out of the Parker records and got to work figuring out rules that would describe the conversion from formula to XML. After spending a few hours on this, I was reminded of why we decided to move from processing formulas to creating new models in the first place.

The collation formulas in Parker on the Web are inconsistent (this is not a criticism of Parker on the Web – the formulas come from different catalogues created over time by many different people, with no shared guidelines; the same thing would happen in any project that combines existing catalogs). Unlike with printed books, there is no standard for manuscript collation formulas, and the formulas in Parker on the Web vary a great deal: some use Arabic numerals, some Roman numerals, and some letters, while some describe flyleaves as quires and some do not. Because of this inconsistency, it was very difficult to get a handle on everything a process would need to catch in order to convert every detail of a formula into an XML model. The use of letters, Arabic numerals, and Roman numerals is one example: to identify quires I would need a script able to interpret each of these, to recognize when a quire identified by a letter was a set of flyleaves and when it was not, and to generate multiple quires when presented with a span of numbers or letters.
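To illustrate just one of these problems, here is a sketch of a normalizer for quire identifiers that might arrive as Arabic numerals, Roman numerals, or letters. It is my own illustration of the difficulty, not the script I attempted for the Parker formulas, and it already has to make a judgment call it cannot really make: whether a lone letter is a quire label, a Roman numeral, or a set of flyleaves.

# Sketch: normalizing quire identifiers that may be Arabic, Roman, or alphabetic.
# Illustrative only; real formulas need context this function does not have
# (for instance, whether "a" names a quire or a set of flyleaves).
import re

ROMAN = {"i": 1, "v": 5, "x": 10, "l": 50, "c": 100}

def roman_to_int(s):
    total, prev = 0, 0
    for ch in reversed(s.lower()):
        value = ROMAN[ch]
        total = total - value if value < prev else total + value
        prev = max(prev, value)
    return total

def normalize_quire_id(token):
    # Return an integer quire number for "12", "xii", or "b" ... when we can.
    token = token.strip()
    if token.isdigit():
        return int(token)
    if re.fullmatch(r"[ivxlc]+", token, re.IGNORECASE):
        return roman_to_int(token)                  # but a lone "c" or "l" might be a letter label!
    if re.fullmatch(r"[a-z]", token, re.IGNORECASE):
        return ord(token.lower()) - ord("a") + 1    # letter quires: a=1, b=2, ...
    raise ValueError(f"Cannot interpret quire identifier: {token!r}")

for example in ["3", "xii", "b"]:
    print(example, "->", normalize_quire_id(example))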

Penn Collation Modeler

It’s possible this could be done, given enough time and expertise, but given my constraints I was clearly not going to be able to do it myself for this talk. So instead I turned to the Collation Modeler, the tool we use at Penn to build models from scratch. (Another implementation of VisColl is being developed as part of the Digital Tools for Manuscript Study project by the Old Books New Science Lab at the University of Toronto.)

If you have a collation formula to work from, it’s actually pretty easy to build a model for it in the Collation Modeler.

MS 433 in the Penn Collation Modeler

Here is MS 433 again, this time in the Collation Modeler. On the main manuscript page I’ve listed all the quires in the manuscript, and you can see the number of leaves in each. Using the Collation Modeler I can generate multiple regular quires all at once and then modify them, or create quires one at a time. Folio numbers are generated automatically, but if the manuscript is paginated I need to change the folio numbers to page numbers (formatted as two numbers separated by a dash); to make this easier I wrote a script to fix the numbering in the finished collation model rather than doing it in the modeler (pagination will be built into the system for the new data model).
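The renumbering fix looked roughly like this sketch (hypothetical code, not the actual script, which rewrites the XML collation model rather than a plain list): each automatically generated folio number n becomes the page pair (2n−1)-(2n).

# Sketch: convert automatically generated folio numbers to page-number pairs.
# A hypothetical stand-in for the post-processing script mentioned above.

def folio_to_pages(folio_number):
    # Folio 1 -> "1-2", folio 2 -> "3-4", and so on.
    first = 2 * folio_number - 1
    return f"{first}-{first + 1}"

print([folio_to_pages(n) for n in range(1, 9)])   # folios 1-8 in a quire of eight
# ['1-2', '3-4', '5-6', '7-8', '9-10', '11-12', '13-14', '15-16']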

MS 433 Quire 3 in the Penn Collation Modeler

Taking a look at Quire 3, we can see the list of leaves and the folio numbering (which, again, I can change – we can also renumber completely from any point in the manuscript, if, for example, the numbering skips a leaf or numbers repeat). We can also note the “mode” of a leaf: is it original to the manuscript, added, a replacement, or missing? Once the model is built in the Collation Modeler, I output the collation model as an XML file.

MS 433 output: diagrams, bifolia view, and collation formula

And then I processed this model using the existing scripts and got diagrams, bifolia view, and a collation formula (the script actually generates a set of formulas – we can generate as many different flavors of formula as we need). You will note that the formula isn’t exactly like the formula from the record.

MS 433 collation formula: comparing the James formula and the VisColl-generated formula

That’s because the first data model is very simple and doesn’t indicate advanced things like gaps or quire groupings (indicated by “gap” and || dividers in this formula). This is actually something that can be done in the second data model, so hopefully by the end of the summer we’ll be able to output something that looks more like the record formula, and also include that information with the diagrams and bifolia view.

Once I decided to forgo processing the formulas, work progressed more quickly. I was able to get 21 formulas into the Collation Modeler within a few hours spread over two days. In addition to referencing the formula, I would also reference the folio or page numbering in the manuscript description (which notes when page numbers are missing, repeated, or otherwise inconsistent), and at times I would reference the image files too (although I did that to double-check numbering, not to seek out physical clues to collation).

Work slowed down again when I discovered, while entering the data, that in several cases the foliation or pagination given in the record didn’t agree with the numbering required by the collation formula. In the Collation Modeler you specify which folio or page aligns with which leaf in a quire – there should be a 1:1 correspondence between foliated leaves (or pairs of paginated pages) and the leaves listed in the collation model. The first time I noticed this, a manuscript with several regular quires of 8 ended up with 8 more leaves required by the formula than were accounted for in the manuscript description. I figured that the person who made the formula got caught up in the regular quires and just added an extra one to the count, so I was comfortable removing one quire of 8 from the model. It’s not always this clear, however.

A few examples of instances where foliation and collation don’t add up

I have a list of manuscripts that I was unable to make models for because of slight discrepancies between the number of leaves needed by the model and the number of pages or folios listed in the description. Somebody would need to sit down with the manuscripts to see whether the problem is with the formula or with the foliation or pagination. (This points to a positive side effect of the collation modeling approach that I didn’t consider when we started: it can be used as a tool in the cataloger’s toolkit to double-check that the collation and the numbering align.)

Just for fun, I’ve created a website that links brief records to Parker on the Web and to the diagram/bifolia collation views (I’m afraid it’s not very pretty). If you’d like to see it, visit parkercollations.omeka.net.

Although the combined diagram/bifolia view is interesting on its own, I’m most interested in how it might be combined with the more traditional facing-page view to provide an alternative means of access and navigation for digitized manuscripts. I’m currently co-PI on Bibliotheca Philadelphiensis (BiblioPhilly), a collaboration between the University of Pennsylvania, the Free Library of Philadelphia, Lehigh University, and the Philadelphia Area Consortium of Special Collections Libraries (PACSCL), funded by the Council on Library and Information Resources. BiblioPhilly is digitizing all the medieval manuscripts in Philadelphia written in Europe before 1600 (476 of them, not including several hundred already digitized at Penn). We have incorporated collation modeling into our cataloging workflow, and we are working with a software developer to build an interface that will make the collation information an integral part of the experience.

Here are some mock-ups that I made to pass along to the software developers so they can see what I’m thinking about, but there will be a certain amount of back and forth with them, and others in the project are involved in this, so I’m not really sure what we’ll come up with but I’m excited to see it. And the reason we can do this is because we have the data. Now we can build the processes.

There’s no reason not to incorporate collation views of some kind into the navigation options of the Parker on the Web and other IIIF collections. There would need to be a standard way to model the collation within IIIF manifests, and then a plug-in for IIIF image viewers that takes advantage of that new data in new and interesting ways.

I hope that our experience with integrating VisColl into BiblioPhilly from the beginning, and my experiments building models from the Parker formulas for this talk, will encourage Parker on the Web and other libraries to develop more experimental interfaces for their digitized manuscripts.

Addendum

During his presentation “The Durham Library Recreated project,” Dr. Richard Higgins from Durham University Library suggested using the Bodleian Library’s Manifest Editor, one tool in their Digital Manuscripts Toolkit, to rearrange images so they present as bifolia in the facing-page view. Here is a screenshot of the manifest for MS 433 with the first quire rearranged as bifolia:

This works on one level: if you paged through this in an interface using a Book View, you would be presented with the conjoin leaves as sheets. But it’s really just another flat list of images, presented one after the other, only in a different order than they appear in the book. (Edit on 3/20/2018: It’s come to my attention that this is the general approach used by Electronic Beowulf 4.0 in that edition’s collation navigation, so if you want to try paging through manuscript images organized by bifolia you can do it there. Instructions are here; be sure to select “manuscript” for both sides or else it’s not possible to click the collation option.) This approach doesn’t really express the structural, three-dimensional aspect of the manuscript’s collation, so it can’t be used to generate alternative views (like diagrams or formulas). A manifest like this could, however, be another kind of output from a collation model, though I think for IIIF it would make more sense to make the model part of the manifest, or something standard that IIIF APIs combine with manifests, to create any number of collation-aware views.

Dot’s Twitter Bots

I made some Twitter bots! It was mostly very easy.

The bots I made use Zach Whalen’s SSBot, documented in “How to Make A Twitter Bot with Google Spreadsheets version 0.4,” which includes all the information you need about how to link your Twitter account to the spreadsheet and start the bot tweeting. The only thing I’ll note is that the spreadsheet’s “Project Key” (asked for in Step 4) is deprecated; you’ll need to use the Script ID instead (it’s located directly under the Project Key in the spreadsheet’s Project Properties).

Once you link the Twitter account to the SSBot, you enter data in the spreadsheet and that data is what gets tweeted.

Here’s a list of bots I made:

For all but WhyBeBot I generated a list of 140-character strings and pasted it into column one of the “Select from Columns” tab in the SSBot spreadsheet. This was really the most difficult and interesting part, because in each case I had to figure out how to download and process the texts. For example, for CollationBot I had to figure out how to pull just the collation formulas out of the records, while for the full-text bots I had to download the texts, find the sentences, and ideally find sentences that were less than 140 characters (if you pay attention you can see that these bots were created over time, and I got much better later on about including only complete sentences). Clearly most of these bots were made before Twitter increased the limit to 280 characters; I may go back and lengthen the strings someday.
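The sentence-harvesting step looked roughly like the following sketch – a reconstruction of the general approach, not the scripts I actually used, and the file names are placeholders.

# Sketch: pull tweet-length sentences out of a plain-text file.
# A reconstruction of the general approach, not the scripts actually used; file names are placeholders.
import re

LIMIT = 140   # these bots predate the move to 280 characters

with open("source_text.txt", encoding="utf-8") as f:      # placeholder input file
    text = " ".join(f.read().split())                     # collapse line breaks and extra spaces

# Naive sentence splitter: break after ., !, or ? followed by whitespace.
sentences = re.split(r"(?<=[.!?])\s+", text)
tweetable = [s for s in sentences if 0 < len(s) <= LIMIT]

with open("bot_column.txt", "w", encoding="utf-8") as f:  # paste this column into the SSBot sheet
    f.write("\n".join(tweetable))

print(f"{len(tweetable)} of {len(sentences)} sentences fit in {LIMIT} characters")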

WhyBeBot is a bit different. It takes advantage of SSBot’s ability to mix content among columns. Instead of just one column, WhyBeBot has four columns. The first contains only “Why be ” while the third contains only “when you can be “, and the second and fourth both have a randomly-generated list of a few hundred adjectives.
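The column mixing amounts to something like this sketch; SSBot does the combination in the spreadsheet itself, and the adjective list here is just a stand-in.

# Sketch of WhyBeBot's column mixing, done here with random choices.
import random

adjectives = ["ordinary", "luminous", "grumpy", "serendipitous", "mildly damp"]   # stand-ins

def why_be_tweet():
    return f"Why be {random.choice(adjectives)} when you can be {random.choice(adjectives)}"

print(why_be_tweet())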

There are many other ways to make Twitter bots (I know that a lot of people have had good luck with Cheap Bots Done Quick – I’ve never tried it, maybe someday). I would like to make more bots; the setup is pretty simple, and getting the content situated is a fun challenge.

How to download images using IIIF manifests, Part II: Hacking the Vatican

Last week I posted on how to use a Firefox plugin called Down Them All to download all the files from an e-codices IIIF manifest (there’s also a tutorial video on YouTube, one of a small but growing collection that will soon include a video outlining the process described here), but not all manifests include direct links to images. The manifests published by the Vatican Digital Library are a good example: the URLs in these manifests don’t link directly to images; you need to add criteria to the end of the URLs to reach the images. What can you do in that case? You need to build a list of URLs pointing to images, and then you can use Down Them All (or other tools) to download them.

In addition to Down Them All I like to use a combination of TextWrangler and a website called Multilinkr, which takes text URLs and turns them into hot links. Why this is important will become clear momentarily.

Let’s go!

First, make sure you have all the software you’ll need: Firefox, Down Them All, and TextWrangler.

Next, we need to pull all the base URLs out of the Vatican manifest.

  1. Search the Vatican Digital Library for the manuscript you want. Once you’ve found one, download the IIIF manifest (click the “Bibliographic Information” button on the far left, which opens a menu, then click on the IIIF manifest link)
    Vatican Digital Library.

    Viewing Bibliographic Information. IIIF manifest is on the bottom of the list
  2. Open the manifest you just downloaded in TextWrangler. When it opens, it will appear as a single long string:
    Manifest open in TextWrangler.

    You need to get all the URLs on separate lines. The easiest way to do this is to find and replace all commas with a comma followed by a hard return. Do this using the “grep” option, using “\r” to add the return. Your find and replace box will look like this (don’t forget to check the “grep” box at the bottom!):

    Find and replace. Don’t forget grep!

    Your manifest will now look something like this:

    Manifest, now with returns!
  3. Now we’re going to search this file to find everything that starts with “http” and ends with “jp2” (what I’m calling the base URLs). We’ll use the “grep” function again, and a little regular expression ( .* ) that will match everything between the beginning of the URL and the end. Your Find window should look like this (again, don’t forget to check “grep”). Click “Find All”:
    Find the URLs that end with “jp2”

    Your results will appear in a new window, and will look something like this:

    Search results.
  4. Now we want to export these results as text, and then remove anything in the file that isn’t a URL. First, go to TextWrangler’s File menu and select “Export as Text”:
    Export as Text.

    Save that text file wherever you’d like. Then open it in TextWrangler. You now need to do some finding and replacing, using “grep” (again!) and the .* regular expression to remove anything that is not http…jp2. I had to do two runs to get everything, first the stuff before the URLs, then the stuff after:

    Before first find and replace.
    After first find and replace.
    Before second find and replace.

    After second find and replace.
  5. You will notice (I hope!) that there are backslashes (\) before every forward slash (/) in the URLs. We need to remove those too. Just do a regular find and replace, and DO NOT check the “grep” box:
    Before the slash find and replace.
    After the slash find and replace.

    Hooray! We have our list of base URLs. Now we need to add the criteria necessary to turn these base URLs into direct links to images.

    I keep mentioning the criteria required to turn these links from error-throwers into image files. If you go to the Vatican Digital Library website and mouse over the “Download” button for any image file, you’ll see what I mean. As you mouse over that button, a bar will appear at the very bottom of your window, and if you look carefully you’ll see that the URL there is the base URL (ending in “jp2”) followed by four things separated by slashes:

    Check out the bits after “jp2” in the URL.

    There is a detailed description of what exactly these mean in the IIIF Image API Documentation on the IIIF website, but basically:

    [baseurl]/region/size/rotation/quality

    So in this case, we have the full region (the entire image, not a piece of it), a size of 1047 pixels across by however tall (since there is nothing after the comma), a rotation of 0 degrees, and a quality of native (aka default, I think – one could also use bitonal or gray to get images of those qualities). I like to get the full image size, so what I’m going to add to the end of the URLs is:

    [baseurl]/full/full/0/native.jpg

    We’ll just do this using another find and replace in TextWrangler.

  6. We’re just adding the additional criteria after the file extension, so all I do is find the file extension – jp2 – and replace all with “jp2/full/full/0/native.jpg”.
    Adding criteria: before find and replace.
    Adding criteria: After find and replace.

    Test one, just to make sure it works. Copy and paste the URL into a browser.

    Works for me.
  7. Now – finally! I promise! – you can use Down Them All to download all those lovely image files. In order to do that you need to turn the text links into hot links. When I was testing this I first tried opening the text file in Firefox and pointing Down Them All to it, but it broke Down Them All – and I mean BROKE it. I had to uninstall Down Them All and delete everything out of my Firefox profile before I could get it to work again. Happily, I found a tool that makes it easy to turn those text links into hot links: Multilinkr. So now open a new tab in Firefox and open Multilinkr. Copy all the URLs from TextWrangler and paste them into the Multilinkr box. Click the “Links” button and gasp as the text links turn into hot links:
    Text links.
    Hot links *gasp*

    Now go up to the Firefox “Tools” menu and select “Down Them All Tools > Down Them All” from the dropdown. Down Them All should automatically recognize all the files and highlight them. There are two things to be careful about here. The first is that you need to specify a download location. It will default to your Downloads folder, but I like to indicate a new folder named with the shelfmark of the manuscript I’m downloading. You can also browse to download the files wherever you’d like. The second is that Down Them All will keep file names the same unless you tell it to do something different. In the case of the Vatican that’s not ideal, since all the files are named “native.jpg”, so if you don’t do something with the “Renaming Mask” you’ll end up with native.jpg, native.jpg(1), native.jpg(2), etc. I like to change the Renaming Mask from the default *name*.*ext* to *flatsubdirs*.*ext* – “flatsubdirs” stands for “flat subdirectories”, and it means the downloaded files will be named according to the path of subdirectories they are downloaded from. In the case of the Vatican files, a file that lives here:

    http://digi.vatlib.it/iiifimage/MSS_Vat.lat.3773/3396_0-AD0_f11fd975a99e2b099ee569f7667f8b8d0fd922dbc0cf5cd6730cda1a00626794_1469208118412_Vat.lat.3773_0003_pa_0002.jp2/full/full/0/native.jpg

    will be renamed

    iiifimage-MSS_Vat.lat.3773-3396_0-AD0_f11fd975a99e2b099ee569f7667f8b8d0fd922dbc0cf5cd6730cda1a00626794_1469208118412_Vat.lat.3773_0003_pa_0002.jp2-full-full-0.jpg

    This is still a mouthful, but both the shelfmark (Vat.lat.3773) and the page or folio number are there (here it’s pa_0002.jp2 = page 2; in other manuscripts you’ll see, for example, fr_0003r.jp2), so it’s simple enough to use Automator or another tool to batch rename the files, removing all the other bits and leaving just the shelfmark and folio or page number.

There are other ways you could do this too, for example using Excel to construct the URLs and wget to download them, but I think the method outlined here is relatively simple for people who don’t have strong coding skills. Don’t hesitate to ask if you have trouble or questions about this! And please remember that the Vatican manuscript images are not licensed for reuse, so only download them for your own scholarly work.
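For anyone comfortable with a little scripting, here is a rough sketch of the same workflow as a short Python script instead of TextWrangler and Down Them All: pull the .jp2 base URLs out of the manifest, append the Image API criteria, and download each file under a readable name. The manifest URL is a placeholder, and the assumptions (URLs ending in .jp2, the /full/full/0/native.jpg criteria) simply mirror the steps above.

# A sketch of the same workflow in a short script instead of TextWrangler + Down Them All.
# Assumptions: the manifest's image URLs end in ".jp2", and the Image API criteria
# match the ones described above; the manifest URL is a placeholder.
import os, re
from urllib.request import urlopen, urlretrieve

MANIFEST_URL = "https://example.org/manifest.json"   # placeholder: use the manifest you downloaded
SUFFIX = "/full/full/0/native.jpg"                   # region/size/rotation/quality, as above
OUT_DIR = "vatican_images"

with urlopen(MANIFEST_URL) as response:
    manifest_text = response.read().decode("utf-8")

manifest_text = manifest_text.replace("\\/", "/")    # step 5 above: remove the escaping backslashes

# Step 3 above as a regular expression: everything from "http" to ".jp2".
base_urls = sorted(set(re.findall(r'https?://[^"]+?\.jp2', manifest_text)))

os.makedirs(OUT_DIR, exist_ok=True)
for base in base_urls:
    # Name each file after the last part of its path, like the renaming mask but shorter.
    filename = base.rsplit("/", 1)[-1].replace(".jp2", ".jpg")
    urlretrieve(base + SUFFIX, os.path.join(OUT_DIR, filename))
    print("saved", filename)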

How to download images using IIIF manifests, Part I: DownThemAll

IIIF manifests are great, but what if you want to work with digital images outside of a IIIF interface? I’ve figured out a few different ways to use IIIF manifests to download all the images from a manuscript. The exact approach will vary, since different institutions construct their image URLs in different ways. Here’s the first approach, which is fairly straightforward and uses e-codices as an example. Tomorrow I’ll post a second piece on the Vatican Digital Library. Please remember that most institutions license their images, so don’t repost or publish images unless the institution specifically allows this in their license.

Method 1: The manifest has URLs that resolve directly to image files

This is the easiest method, but it only works if the manifest contains URLs that resolve directly to image files. If you can copy a URL, paste it into a browser, and see an image display, you can use this method. The manifests provided by e-codices work this way.

  1. Install DownThemAll, a Firefox browser plugin that allows you to download all the files linked to from a webpage. (There is a similar browser plugin for Chrome, called Get Them All, but it did not recognize the image files linked from the manifest)
  2. Go to e-codices, search for a manuscript, and click the “IIIF manifest” link on the Overview page.
    IIIF manifest link (look for the colorful IIIF logo)

    The manifest will open in the browser. It will look like a mess, but it doesn’t need to look good.

    Messy manifest.
  3. Open DownThemAll. It will recognize all the files linked from the manifest (including .json files, .jpg, .j2, and anything else) and list them. Click the box next to “JPEG Images” at the bottom of the page (under “Filters”). It will highlight all the JPEG images in the list, including the various “default.jpg” images and files ending with “.jp2”

    JPEG images highlighted in Down Them All
  4. Now, we only want the images that are named “default.jpg”. These are the “regular” jpeg files; the .jp2 files are the masters and, although you could download them, your browser wouldn’t know what to do with them. So we need to create a new filter so we get only the default.jpg files. To do this, first click “Preferences” in the lower right-hand corner, then click the “Filters” button in the resulting window.
    Filters.

    There they are. To create a new filter, click the “Add New Filter” button, and call the new filter “Default Jpg” (or whatever you like). In the Filtered Extensions field, type “/\/default.jpg” – the filter will select only those files that end with “default.jpg” (yes you do need three slashes there!). Note that you do not need to press save or anything, the filter list updates and saves automatically.

    New filter
  5. Return to the main Down Them All view and check the box next to your newly-created filter. Be amazed as all the “default.jpg” files are highlighted.

    AMAZE
  6. Don’t hit download just yet. If you do, it will download all the files with their given names, and since they are all named “default.jpg” it won’t end well. It will also download them all directly to whatever is specified under “Save Files in” (in my case, my Downloads folder), which also may not be ideal. So you need to change the Renaming Mask, at least to give each file a unique name, and specify where to download all those files. In the case of e-codices the manifest URLs include both the manuscript shelfmark and the folio number for each image, so let’s use the Renaming Mask to name the files according to the file path. Simply change *name* to *flatsubdirs* (flat subdirectories). Under “Save Files in”, browse to wherever you want to download all these files.

    Renaming Mask and Save Files in, ready to go
  7. Press “Start” and wait for everything to download.
    Downloading…

    Congratulations, you have downloaded all the images from this manuscript! You’ll probably want to rename them (if you’re on Mac you can use Automator to do this fairly easily), and you should also save the manifest alongside the images.

    TOMORROW: THE VATICAN!
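If you would rather script this case too, the direct-URL situation is even simpler. Here is a rough sketch of my own (not part of e-codices or DownThemAll) that keeps only the default.jpg links from a manifest and downloads them; the manifest URL is a placeholder, and the file naming assumes the standard IIIF Image API URL pattern.

# A sketch for Method 1: manifests whose URLs resolve directly to image files.
# My own illustration, not part of e-codices or DownThemAll; the manifest URL is a placeholder.
import os, re
from urllib.request import urlopen, urlretrieve

MANIFEST_URL = "https://example.org/iiif/manifest.json"   # placeholder: paste the e-codices manifest link
OUT_DIR = "downloaded_images"

with urlopen(MANIFEST_URL) as response:
    manifest_text = response.read().decode("utf-8")

# The same idea as the "default.jpg" filter above: keep only the plain JPEG links.
image_urls = sorted(set(re.findall(r'https?://[^"]+/default\.jpg', manifest_text)))

os.makedirs(OUT_DIR, exist_ok=True)
for url in image_urls:
    # IIIF image URLs end .../{identifier}/{region}/{size}/{rotation}/{quality}.{format},
    # so the identifier (which includes the folio number) is five segments from the end.
    identifier = url.split("/")[-5]
    urlretrieve(url, os.path.join(OUT_DIR, identifier.replace(".jp2", "") + ".jpg"))
    print("saved", identifier)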

UPenn’s Schoenberg Manuscripts, now in PDF

Hi everyone! It’s been almost a year since my last blog post (in which I promised to post more frequently, haha) so I guess it’s time for another one. I actually have something pretty interesting to report!

Last week I gave an invited talk at the Cultural Heritage at Scale symposium at Vanderbilt University. It was amazing. I spoke on OPenn: Primary Digital Resources Available to Everyone, which is the platform we use in the Schoenberg Institute for Manuscript Studies at the University of Pennsylvania Libraries to publish high-resolution digital images and accompanying metadata for all our medieval manuscripts (I also talked for a few minutes about the Schoenberg Database of Manuscripts, which is a provenance database of pre-1600 manuscripts). The philosophy of OPenn is centered on openness: all our manuscript images are in the public domain and our metadata is licensed with Creative Commons licenses, and none of those licenses prohibit commercial use. Next to openness, we embrace simplicity. There is no search facility or fancy interface to the data. The images and metadata files are on a file system (similar to the file systems on your own computer) and browse pages for each manuscript are presented in HTML that is processed directly from the metadata file. (Metadata files are in TEI/XML using the manuscript description element)


This approach is actually pretty novel. Librarians and faculty scholars alike love their interfaces! And, indeed, after my talk someone came up to me and said, “I’m a humanities faculty member, and I don’t want to have to download files. I just want to see the manuscripts. So why don’t you make them available as PDF so I can use them like that?”

This gave me the opportunity to talk about what OPenn is, and what it isn’t (something I didn’t have time to do in my talk). The humanities scholar who just wants to look at manuscripts is really not the audience for OPenn. If you want to search for and page through manuscripts, you can do that on Penn in Hand, our longstanding page-turning interface. OPenn is about data, and it’s about access. It isn’t for people who want to look at manuscripts, it’s for people who want to build things with manuscript data. So it wouldn’t make sense for us to have PDFs on OPenn – that’s just not what it’s for.

Landing page for Penn in Hand.

HOWEVER. However. I’m sympathetic. Many, many people want to look at manuscripts, and PDFs are convenient, and I want to encourage them to see our manuscripts as available to them! So, even if Penn isn’t going to make PDFs available institutionally (at least, not yet – we may in the future), maybe this is something I could do myself. And since all our manuscript data is available on OPenn and licensed for reuse, there is no reason for me not to do it.

So here they are.

If you click that link, you’ll find yourself in a Google Drive folder titled “OPenn manuscript PDFs”. In there is currently one folder, “LJS Manuscripts.” In that folder you’ll find a link to a Google spreadsheet and over 400 PDF files. The spreadsheet lists all the LJS manuscripts (LJS = Laurence J. Schoenberg, who gifted his manuscripts to Penn in 2012), including catalog descriptions, origin dates, origin locations, and shelfmarks. Let’s say you’re interested in manuscripts from France. You can highlight the Origin column and do a “Find” for “France.” It’s not a fancy search, so you’ll have to write down the shelfmarks of the manuscripts as you find them, but it works. Once you know the shelfmarks, go back into the “LJS Manuscripts” folder and find and download the PDF files you want. Note that some manuscripts may have two PDF files, one with “_extra” in the file name. These are images that are included on OPenn but are not part of the front-to-back digitization of a manuscript. They might include things like extra shots of the binding, or reference shots.

If you are interested in knowing how I did this, please read on. If not, enjoy the PDFs!


How I did it

I’ll be honest, this is my favorite part of the exercise so thank you for sticking with me for it! There won’t be a pop quiz at the end although if you want to try this out yourself you are most welcome to.

First I downloaded all the web jpeg files from the LJS collection on OPenn. I used wget to do this, because with wget I am able to get only the web jpeg files from all the collection folders at once. My wget command looked like this:

wget -r -np -A "_web.jpg" http://openn.library.upenn.edu/Data/0001/

Brief translation:

wget = use the wget program
-r = “recursive”, basically means go into all the child folders, not just the folder I’m pointing to
-np = “no parent”, basically means don’t go into the parent folders, no matter what
-A “_web.jpg” = “accept list”, in this case I specified that I only want those files that contain _web.jpg (which all the web jpeg files on OPenn do)
http://openn.library.upenn.edu/Data/0001/ = where all the LJS manuscript data lives

I didn’t use the -nd flag, which I usually do (-nd = “no directory”; if you don’t use this flag you get the entire file structure of the file server starting from the root, which in this case is openn.library.upenn.edu). What this means, practically, is that if you use wget to download one file from a directory five levels deep, you get empty folders four levels deep and then the bottom directory with one file in it. Not fun. But in this case the structure is helpful, and you’ll see why later.

At my house, with a pretty good wireless connection, it took about 5 hours to download everything.

I used Automator to batch create the PDF files. After a bit of googling I found this post on batch creating multipage PDF files from jpeg files. There are some different suggestions there, but I opted to use Mac’s Automator. There is a workflow linked from that post; I downloaded it and (because the folders of jpeg images I was going to process are in different parent folders) replaced the first step in the workflow, Get Selected Finder Items, with Get Specified Finder Items. This allowed me to search in Automator for exactly what I wanted, so I added all the folders called “web” located under the ancestor folder “openn.library.upenn.edu” (which was created when I downloaded all the images from OPenn in the previous step). In this step Automator creates one PDF file named “output.pdf” for each manuscript, in the same location as that manuscript’s web jpeg images (in a folder called web – which is important to know later).
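The same batch step can also be scripted. Here is a sketch using the Pillow imaging library – my own substitute for the Automator workflow, not the workflow from that post – which walks each folder of web jpegs and saves a single output.pdf alongside them.

# Sketch: batch-create one multi-page PDF per image folder from its web jpegs.
# An alternative to the Automator workflow described above, using the Pillow library
# (pip install pillow); the glob pattern follows the folder layout created by wget.
import glob, os
from PIL import Image

for web_dir in glob.glob("openn.library.upenn.edu/Data/0001/*/data/*web"):   # matches web and extra_web folders
    jpegs = sorted(glob.glob(os.path.join(web_dir, "*_web.jpg")))
    if not jpegs:
        continue
    pages = [Image.open(p).convert("RGB") for p in jpegs]
    out_path = os.path.join(web_dir, "output.pdf")
    # Pillow writes a multi-page PDF when save_all is set and the remaining pages are appended.
    pages[0].save(out_path, "PDF", save_all=True, append_images=pages[1:])
    for page in pages:
        page.close()
    print("wrote", out_path)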

Once I created the PDFs, I no longer needed the web jpeg files, so I took some time to delete them all. I did this by searching in Finder for “_web.jpg” within openn.library.upenn.edu and then sending everything to the Trash. This took ages, but when it was done the only things left in those folders were the output.pdf files.

I still had more work to do: I needed to change the names of the PDF files so I would know which manuscripts they represented. Again, after a bit of googling, I chanced upon this post, which includes an AppleScript that did exactly what I needed: it renames files according to the path of their location on the file system. For example, the file “output.pdf” located in Macintosh HD/Users/dorp/Downloads/openn/openn.library.upenn.edu/Data/0001/ljs101/data/web would be renamed “Macintosh HD_Users_dorp_Downloads_openn_openn.library.upenn.edu_Data_0001_ljs101_data_web_001.pdf”. I’d never used AppleScript before, so I had to figure that out, but once I did it was smooth sailing – it just took a while. (To run the script I copied it into Apple’s Script Editor, hit the play button, and selected openn.library.upenn.edu/Data/0001 when it asked me where I wanted to point the script.)

Finally, I had to remove all the extraneous pieces of the file names to leave just the shelfmark (or shelfmark + “extra” for those files that represent the extra images). Automator to the rescue again!

  1. Get Specified Finder Items (adding all PDF files located in the ancestor folder “http://openn.library.upenn.edu”)
  2. Rename Finder Items to replace text (replacing “Macintosh HD_Users_dorp_Downloads_openn_openn.library.upenn.edu_Data_0001_” with nothing)
  3. Rename Finder Items to replace text (replacing “_data_web_001” with nothing)
  4. Rename Finder Items to replace text (replacing “_data_extra_web_001” with “_extra” – this identifies PDFs that are for “extra” images)
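Those renaming steps could also be collapsed into one short script. Here is a sketch (not the AppleScript or Automator workflow I actually used) that renames each output.pdf straight from its path, pulling out the shelfmark and tagging the extra images; the “pdfs” output folder and the “extra_web” folder name are assumptions on my part.

# Sketch: rename each output.pdf after its manuscript's shelfmark, straight from the path.
# A scripted alternative to the AppleScript + Automator renaming described above;
# the "pdfs" output folder and the "extra_web" folder name are assumptions.
import glob, os, shutil

os.makedirs("pdfs", exist_ok=True)
for pdf in glob.glob("openn.library.upenn.edu/Data/0001/*/data/*web/output.pdf"):
    parts = pdf.split("/")
    shelfmark = parts[-4]                                   # e.g. "ljs101"
    suffix = "_extra" if parts[-2] == "extra_web" else ""   # extra images get their own PDF name
    target = os.path.join("pdfs", shelfmark + suffix + ".pdf")
    shutil.move(pdf, target)
    print(pdf, "->", target)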

The last thing I had to do was move them into Google Drive. Again, I just searched for “.pdf” in Finder (taking just those in openn.library.upenn.edu/Data/0001) and dragged them into Google Drive.

All done!

I generated the spreadsheet by running an XSLT script over the TEI manuscript descriptions (it’s based on a spreadsheet I created a couple of years ago when I first uploaded data about the Penn manuscripts to Viewshare). Leave a comment or send me a note if that sounds interesting and I’ll make a post on it.
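For anyone curious, the extraction can also be done without XSLT. Here is a sketch in Python (not the XSLT I actually used) that pulls a few fields out of each TEI manuscript description and writes them to a CSV; the file layout and element paths are assumptions based on OPenn-style TEI msDesc records.

# Sketch: build a CSV of shelfmarks, titles, origins, and dates from TEI manuscript descriptions.
# Not the XSLT I used; the glob pattern and element paths are assumptions about OPenn-style TEI files.
import csv, glob
from xml.etree import ElementTree as ET

TEI = "{http://www.tei-c.org/ns/1.0}"

def first_text(root, path):
    el = root.find(path)
    return el.text.strip() if el is not None and el.text else ""

rows = []
for path in glob.glob("openn.library.upenn.edu/Data/0001/*/data/*_TEI.xml"):   # assumed layout
    root = ET.parse(path).getroot()
    rows.append({
        "shelfmark": first_text(root, f".//{TEI}msIdentifier/{TEI}idno"),
        "title": first_text(root, f".//{TEI}msItem/{TEI}title"),
        "origin": first_text(root, f".//{TEI}origPlace"),
        "date": first_text(root, f".//{TEI}origDate"),
    })

with open("ljs_manuscripts.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["shelfmark", "title", "origin", "date"])
    writer.writeheader()
    writer.writerows(rows)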