“What is an edition anyway?” My Keynote for the Digital Scholarly Editions as Interfaces conference, University of Graz

This week I presented this talk as the opening keynote for the Digital Scholarly Editions as Interfaces conference at the University of Graz. The conference is hosted by the Centre for Information Modelling, Graz University; the programme chair is Georg Vogeler, Professor of Digital Humanities; and the programme is endorsed by Dixit – Scholarly Editions Initial Training Network. Thanks so much to Georg for inviting me! And thanks to the audience for the discussion after. I can’t wait for the rest of the conference.

What is an edition anyway?

Thank you to Georg Vogeler for inviting me to present the keynote at this symposium, thank you to Dixit for making this conference possible, and danke to the speakers who welcomed us so warmly. I’m excited to be here and looking forward to hearing what the speakers have to say about digital scholarly editions as interfaces. Georg invited me here to talk about my work on medievalists’ use of digital editions. But first, I have a question.

What is an edition? I think we all know what an edition is, but it’s still fun, I find, to investigate the different ways that people define edition, or think about editions. So despite the title of this talk, most of what I’m going to be talking about is the various ways that people think about editions and why that matters to those of us in the room who spend our time building editions, and at the end I’m going to share my thoughts on directions I’d like to see medieval editions in particular take in the future.

I’ll admit that when I need a quick answer to a question, often the first place I turn to is Google. Preparing for this talk was no different. So, I asked Google to define edition for me, and this is what I got. No big surprise. Two definitions, the first “a particular form or version of a published text,” and the second “the total number of copies of a book, newspaper, or other published material issued at one time.” The first definition here is one that’s close to the way I would answer this question myself. I think I’d generally say that an edition is a particular version of a text. It might be a version compiled together from other versions, like in a scholarly critical edition, but need not be. I’m a medievalist, so this got me thinking about texts written over time, and what might make a text rise to the level of being an “edition”, or not.

Bankes Papyrus, British Museum Papyrus 114.

So here is some text from the Iliad, written on a papyrus scroll in the 2nd century BC. The scroll is owned by the British Museum, Papyrus 114, also known as the Bankes Papyrus. The Iliad, you probably know, is an ancient Greek epic poem set during the Trojan War, which focuses on a quarrel between King Agamemnon and the warrior Achilles. If you are a Classicist, I apologize in advance for simplifying a complex textual situation. If you aren’t a Classicist, and you’ve read the Iliad, you probably read it in a translation from Greek into your native language, and this text most likely would have been presented to you as “The Text of The Iliad” – that is, a single text. That text, however, is built from many small material fragments that were written over a thousand years, and which represent the written form of a text that was composed through oral performance. The Bankes Papyrus is actually one of the most complete examples of the Iliad in papyrus form – most surviving examples are much more fragmentary than this.

Venetus A, aka Marcianus Graecus Z. 454 [=822] (ca. 950), fol. 12r

As far as we know the text of the Iliad was only compiled into a single version in the 10th century, in what is known as the Venetus A manuscript, now at the Marciana Library in Venice. I have an image of the first page of the first book of the Iliad here. You can see that this presents much more than the text, which is the largest writing in the center-left of the page. This compiled text is surrounded by layers of glossing, which includes commentary as well as textual variants.

UPenn Ms Codex 1058, fol. 12r.

The Venetus A is just one example of a medieval glossed manuscript. Another, more common genre of glossed manuscript is the Glossed Psalter: the text of the Psalter written with glosses, quotations from the Church Fathers, included to comment on specific lines. Here is an example of a Glossed Psalter from the University of Pennsylvania’s collection. This is a somewhat early example, dated to around 1100, which is before the Glossa Ordinaria was compiled (the Glossa Ordinaria was the standard commentary on the scriptures into the 14th century). Although this isn’t as complex as the Venetus A, you can still see at least two levels of glossing, both in the text and around the margins.

UPenn Ms. Codex 1640, fol. 114r.

One more example, a Manipulus florum text from another University of Pennsylvania manuscript. Thomas of Ireland’s Manipulus florum (“Handful of flowers”), compiled in the early 14th century, belongs to the genre of medieval texts known as florilegia, collections of authoritative quotations that are the forerunners of modern reference works such as Bartlett’s Familiar Quotations and The Oxford Dictionary of Quotations. This particular florilegium contains approximately 6000 Latin proverbs and textual excerpts attributed to a variety of classical, patristic and medieval authors. The flora are organized under alphabetically-ordered topics; here we see magister, or teacher. The red text is citation information, and the brown text is the quotations themselves.

Now let’s take a look at a modern edition, Richard Marsden’s 2008 edition of The Old English Heptateuch published with the Early English Text Society. A glance at the table of contents reveals an introduction with various sections describing the history of editions of the text, the methodology behind this edition, and a description of the manuscripts and the relationships among them. This is followed by the edited texts themselves, which are presented in the traditional manner: with “the text” at the top of the page, and variant readings and other notes – the apparatus – at the bottom. In this way you can read the text the editor has decided is “the text”, but also check to see how individual manuscripts differ in their readings. It is, I’ll point out, very similar to the presentation of the Iliad text in the Venetus A.

Electronic and digital editions have traditionally (as far as we can talk about there being a tradition of these types of editions) presented the same type of information as print editions, although the expansiveness of hypertext has allowed us to present this information interactively, selecting only what we want to see at any given moment and enabling us to follow trails of information via links and pop-ups. For example, I have here Prue Shaw’s edition of Dante’s Commedia, published by Scholarly Digital Editions. Here we have a basic table of contents, which informs us of the sections included in the edition.

Here we have the edited text from one manuscript, with the page image displayed alongside (this of course being one of the main differences between digital and print editions), with variant readings and other notes available at the click of the mouse.

A more extensive content list is also available via dropdown, and with another click I can be anywhere in the edition I wish to be.

Here I am at the same point in the text, except the base text is now this early printed edition, and again the page image is here displayed so I can double-check the editor’s choices should I wish to.

With the possible exception of the Bankes Papyrus, all of these examples are editions. They reflect the purpose of the editor, someone who is not writing original text but is compiling existing text to suit some present desire or need. The only difference is the material through which the edition is presented – handwritten parchment or papyrus, usually considered “primary material”, vs. a printed book or digital media, “secondary material”. And I could even make an argument that the papyrus is an edition as well, if I posit that the individual who wrote the text on the papyrus was compiling it from some other written source or even from the oral tradition.


I want to take a step back now from the question of what is an edition and talk a bit about why, although the answer to this may not matter to me personally, it does matter very much when you start asking people their opinions about editions. (I am not generally a fan of labels and prefer to let things be whatever they are without worrying too much about what I should call them. I’m no fun at parties.)

I’ve been studying the attitudes of medievalists towards digital resources, including editions, since I was a library science graduate student back in 2002. In May 2001 I graduated with an MA from the Medieval Institute at Western Michigan University, with a focus on Anglo-Saxon language, literature, and religious culture. I had taken a traditional course of study, including courses in paleography and codicology, Old English, Middle English, and Latin language and literature, and several courses on the reading of religious texts, primarily hagiographical texts. I was keenly aware of the importance of primary source materials to the study of the middle ages, and I was also aware that there were CD-ROMs available that made primary materials, and scholarly editions of them, available at one’s fingertips. There were even, at this time, the first online collections of medieval manuscripts (notably the Early Medieval Manuscript Collection at the Bodleian Library at Oxford). But I was curious about how much these new electronic editions (and electronic journals and databases, too) were actually being used by scholars. I conducted a survey of medievalists, asking them about their attitudes toward, and use of, electronic resources. I wrote up my findings in a research paper, “Medievalists’ Use of Electronic Resources: The Results of a National Survey of Faculty Members in Medieval Studies,” which is still available if you want to read it, in the IU Bloomington institutional repository.

I conducted a second survey in 2011, and compared findings from the two surveys in an article published in 2013 in Scholarly Editing, “Medievalists and the Scholarly Digital Edition.” The methodologies for these surveys were quite different (the first was mailed to a preselected group of respondents, while the second was sent to a group but also advertised on listservs and social media), and I’m hesitant to call either of them scientific, but with these caveats they do show a general trend of usage in the nine years between them, and this trend reflects what I have seen anecdotally.

In this chart from 2002, we see that 7% of respondents reported using electronic and print editions the same, 44% print mostly, and 48% print only.

Nine years later, while still no-one reports using only electronic editions, 7% report using electronic mostly, 12% electronic and print the same, 58% print mostly, and 22% print only. The largest shift is from “print only” to “print mostly”, and it’s most clearly seen on this chart.

Now this is all well and good, and you’d be forgiven for looking at this chart and coming to the conclusion that all these folks had finally “seen the light” and were hopping online and onto CD-ROM to check out the latest high-tech digital editions in their field. But the written comments show that this is clearly not the case, at least not for all respondents, and that any issues with the survey data come from a disconnect between how I conceive of a “digital edition” and how the respondents conceive of the same.

Exhibit A: Comments from four different respondents explaining when they use digital editions and how they find them useful. I won’t read these to you, but I will point out that the phrase Google Books has been bolded in three of them, and while the other one doesn’t mention Google Books by name, the description strongly implies it.

I have thought about this specific disconnect a lot in the past five years, because I think that it does reflect a general disconnect between how we who create digital editions think about editing and editions, and how more traditional scholars and those who consume editions think about them. Out of curiosity, as I was working on this lecture I asked on Facebook for my “friends” to give me their own favorite definition of edition (not digital edition, just edition), and here are two that reflected the general consensus. The first is very material, a bibliographic description that would be favored by early modernists (as a medievalist I was actually a bit shocked by this definition; although I know what an edition is, bibliographically speaking, I wasn’t thinking in that direction at that point – I was really thinking of a “textual edition”), while the second focuses not so much on how the text was edited but on the apparatus that comes along with it. Thus, an edited text by itself isn’t properly an edition; it requires material explaining the text to be a “real” edition. Interestingly, this second definition arguably includes the Venetus A manuscript we looked at earlier.

This spring, in preparation for this lecture, I created a new survey, based on the earlier surveys (which were more or less identical) but taking as a starting place Patrick Sahle’s definition of a Digital Scholarly Edition:

Digital scholarly editions are not just scholarly editions in digital media. I distinguish between digital and digitized. A digitized print edition is not a “digital edition” in the strict sense used here. A digital edition can not be printed without a loss of information and/or functionality. The digital edition is guided by a different paradigm. If the paradigm of an edition is limited to the two-dimensional space of the “page” and to typographic means of information representation, then it’s not a digital edition.

In this definition Sahle differentiates between a digital edition, which essentially isn’t limited by typography and thus can’t be printed, and a digitized edition, which is and which can. In practice most digitized editions will be photographic copies of print editions, although of course they could just be very simple text rendered fully in HTML pages with no links or pop-ups. While the results of these lines of questioning aren’t directly comparable with the 2002 and 2011 results, I think it’s possible to see a general continuing trend towards a use of digitized editions, if not towards digital editions following Sahle’s definition.

First, a word about methodology. This year’s respondents were entirely self-selecting, and the announcement of the survey, which was online, went out through social media and listservs. I didn’t have a separate selected group. There were 337 total respondents although not every respondent answered every question.

This year, I asked respondents about their use of editions – digital, digitized, and print – over the past year, focusing on the general number of times they had used the editions. Over 90% of respondents report at least some use of digital editions, although only just over 40% report using them “more times than I can count”.

When asked about digitized editions, however, over 75% report using them “more times than I can count”, and only 2 respondents – 0.6% – report not using them at all.

Print edition usage is similar to digitized edition usage, with about 78% reporting they use them “more times than I can count” and no respondents reporting that they don’t use them at all. A chart comparing the three types of editions side-by-side shows clearly how similar the numbers are for digitized and print editions vs. digital editions.

What can we make of this? Questions that come immediately to my mind include: are we building the editions that scholars need? That they will find useful? Are there editions that people want that aren’t getting made? But also: Does it matter? If we are creating our editions as a scholarly exercise, for our own purposes, does it matter if other people use them or not? It might hurt to think that someone is downloading a 19th century edition from Google Books instead of using my new one, but is it okay? And if it’s not, what can we do about that? (I’m not going to try to answer that, but maybe we can think about it this week)


I want to change gears and come back now to this question, what is an edition. I’ve talked a bit about how I conceive of editions, and how others do, and how if I’m going to have a productive conversation about editions with someone (or ask people questions on a survey) it’s important to make sure we’re on the same page – or at least in the same book – regarding what we mean when we say “edition”. But now I want to take a step back – way back – and think about what an edition is at the most basic level. On the Platonic level. If an edition is a shadow on the wall, what is casting that shadow? Some people will say “the urtext” which I think of (not unkindly, I assure you) as the floating text, the text in the sky. The text that never existed until some editor got her hands on it and brought it to life as Viktor Frankenstein brought to life that poor, wretched monster in the pages of Mary Shelley’s classic horror story. I say, we know texts because someone cared enough to write them down, and some of that survives, so what we have now is a written record that is intimately connected to material objects: text doesn’t float, text is ink on skin and ink on paper and notches in stone, paint on stone, and whatever else borne on whatever material was handy. So perhaps we can posit editions that are cast from manuscripts and the other physical objects on which text is borne, not simply being displayed alongside text, or pointed to from text, or described in a section “about the manuscript”, but flipping the model and organizing the edition according to the physical object.

I didn’t come up with this idea, I am sad to say. In 2015, Christoph Flüeler presented a talk at the International Congress on Medieval Studies titled “Digital Manuscripts as Critical Edition,” later posted to the Schoenberg Institute for Manuscript Studies blog. In this essay Flüeler asks: “… how [does] a digital manuscript [stand] in relation to a critical edition of a text? Can the publication of a digital manuscript on the internet be understood as an edition? Further: could such an edition even be regarded as a critical edition?” His answer being, of course, yes. I won’t go into his arguments here; instead I’m going to use them as a jumping-off point, but I encourage you to read his essay.

This concept is very appealing to me. I suppose I should admit now, almost at the end of my keynote, that I am not presently doing any textual editing, and I haven’t in a few years. My current position is “Curator, Digital Research Services” in the Schoenberg Institute for Manuscript Studies at the University of Pennsylvania Libraries in Philadelphia. This position is a great deal of fun and encompasses many different responsibilities. I am involved in the digitization efforts of the unit and I’m currently co-PI of Bibliotheca Philadelphiensis, a grant funded project that will digitize all the medieval manuscripts in Philadelphia, which I can only mention now but I’ll be glad to talk about more later to anyone interested in hearing about it. All our digital images are released in the public domain, and published openly on our website, OPenn, along with human readable HTML descriptions, links to download the images, and robust TEI manuscript descriptions available for download and reuse.

I also do a fair amount of what I think of as experimental work, including new ways to make manuscripts available to scholars and the public. I’ve created electronic facsimiles in the epub format, a project currently being expanded by the Penn Libraries metadata group, which are published in our institutional repository, and I also make short video orientations to our manuscripts which are posted on YouTube and also made available through the repository. In the spring I presented on OPenn for a mixed group of librarians and faculty at Vanderbilt University in Tennessee, after which an art historian said to me, “this open data thing is great and all, but why can’t we just have the manuscripts as PDFs?” So I held my nose and generated PDF files for all our manuscripts, then I did it for the Walters Art Museum manuscripts as well for good measure. I posted them all to Google Drive, along with spreadsheets as a very basic search facility.

Collation visualization via VisColl

I’ve also been working for the past few years on developing a system for modeling and visualizing the physical collation of medieval manuscripts (this is distinct from textual collation, which involves comparing versions of texts). With a bit of funding from the Mellon Foundation and the collaboration of Alexandra Gillespie and her team at the University of Toronto, I am very excited for the next version of that system, which we call VisColl (it is on GitHub if you’d like to check it out – you can see the code and there are instructions for creating your own models and visualizations). The next version will include facilities for connecting tags, and perhaps transcriptions, to the deconstructed manuscript. I hadn’t thought of the thing that this system generates as an edition, but perhaps it is. But instead of being an edition of a text, you might think of it as an edition of a manuscript that happens to have text on it (or sometimes, perhaps, won’t).

I am aware that I’m reaching the end of my time, so I just want to take a few minutes to mention something that I see playing an enormous role in the future of digital-manuscripts-as-editions, and that’s the International Image Interoperability Framework, or IIIF. I think Jeffrey Witt may mention IIIF in his presentation tomorrow, and perhaps others will as well although I don’t see any IIIF-specific papers in the schedule. At the risk of simplifying it, IIIF is a set of Application Programming Interfaces (APIs) – sets of routines, protocols, and tools – to enable the interoperability of image repositories. This means you can use images from different repositories in the same browser or other tool. Here, quickly, is an example of how that can work.

e-codices publishes links to IIIF manifests for each of their manuscripts. A manifest is a json file that contains descriptive and structural metadata for a manuscript, including links to images that are served through a IIIF server. You can look at it. It is human readable, kind of, but it’s a mess.
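For the curious, here is a minimal sketch of one way to peek inside that mess, assuming Python with the requests library (a third-party package) and a IIIF Presentation API 2.x manifest:

import requests  # assumption: third-party library, installed separately

manifest = requests.get("PASTE_A_MANIFEST_URL_HERE").json()
print(manifest["label"])  # descriptive metadata: the manuscript's title
for canvas in manifest["sequences"][0]["canvases"]:  # structural metadata: one canvas per page
    print(canvas["label"], canvas["images"][0]["resource"]["@id"])  # page label and its image URL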

Two e-codices manuscripts (and others) in Mirador.

However, if you copy that link and paste it into a IIIF-conformant tool such as Mirador (a simple IIIF browser which I have installed on my laptop) you can create your own collection and then view and manipulate the images side-by-side. Here I’ve pulled in two manuscripts from e-codices, both copies of the Roman de la Rose.

And here I can view them side by side, I can compare the images, compare the text, and I can make annotations on them too. Here is a tool for creating editions of manuscripts.

(A quick side note: of course there are other tools that offer image tagging ability, including the DM project at SIMS, but what IIIF offers is not a single tool but a system for building and viewing editions, and all sorts of other unnamable things, using manuscripts in different institutions without having to move the images around. I cannot stress enough how radical this is for medieval manuscript studies.)

However, as fond as I am of IIIF, and as promising as I think it is for my future vision, my support for it comes with some caveats. If you don’t know, I am a huge proponent of open data, particularly open manuscript data. The Director of the Schoenberg Institute is Will Noel, an open data pioneer in his own right who has been named a White House Champion of Change, and I take him as my example. I believe that in most cases, when institutions digitize their manuscript collections they are obligated to release those images into the public domain, or at the very least under a Creative Commons Attribution (CC BY) license (to be clear, a license that would allow commercial use), and that manuscript metadata should be licensed for reuse. My issue with IIIF is that it presents the illusion of openness without actual openness. That is, even if images are published under a closed license, once you have the IIIF manifest you can use them to do whatever you want, but only as long as you’re doing it through IIIF-compliant software. You can’t download them and use them outside of the system (to, say, generate PDF or epub facsimiles, or collation visualizations). I love IIIF for what it makes possible, but I also think it’s vital to keep data open so people can use it outside of any given system.

DATA OVER INTERFACE

We have a saying around the Schoenberg Institute, Data Over Interface. It was introduced to us by Doug Emery, our data programmer who was also responsible for the curation of the data of the Archimedes Palimpsest Project and the Walters Art Museum manuscripts. We like it so much we had it put on teeshirts (You can order your own here!). I like it, not because I necessarily agree that the data is always more important than the interface, but because it makes me think about whether or not the data is always more important than the interface. Excellent, robust data with no interface isn’t easily usable (although a creative person will always find a way), but an excellent interface with terrible data or no data at all is useless as anything other than a show piece. And then inevitably my mind turns to manuscripts, and I begin to wonder, in the case of a manuscript, what is the data and what is the interface? Is a manuscript simply an interface for the text and whatever else it bears, or is the physical object data of its own that begs for an interface to present it, to pull it apart and put it back together in some way to help us make sense of it or the time it was created? Is it both? Is it neither?

I am so excited to be here and to hear about what everyone in this room is thinking about editions, and interfaces, and what editions are, and what interfaces are and are for. Thank you so much for your time, and enjoy the conference.

Presented September 23, 2016.

How to download images using IIIF manifests, Part II: Hacking the Vatican

Last week I posted on how to use a Firefox plugin called DownThemAll to download all the files from an e-codices IIIF manifest (there’s also a tutorial video on YouTube, one of a small but growing collection that will soon include a video outlining the process described here), but not all manifests include direct links to images. The manifests published by the Vatican Digital Library are a good example of this: the URLs in those manifests don’t link directly to images; you need to add criteria to the end of the URLs to reach the images. What can you do in that case? You need to build a list of URLs pointing to images, and then you can use DownThemAll (or other tools) to download them.

In addition to Down Them All I like to use a combination of TextWrangler and a website called Multilinkr, which takes text URLs and turns them into hot links. Why this is important will become clear momentarily.

Let’s go!

First, make sure you have all the software you’ll need: Firefox, Down Them All, and TextWrangler.

Next, we need to pull all the base URLs out of the Vatican manifest.

  1. Search the Vatican Digital Library for the manuscript you want. Once you’ve found one, download the IIIF manifest (click the “Bibliographic Information” button on the far left, which opens a menu, then click on the IIIF manifest link)
    Vatican Digital Library.

    Viewing Bibliographic Information. IIIF manifest is on the bottom of the list

  2. Open the manifest you just downloaded in TextWrangler. When it opens, it will appear as a single long string:
    Manifest open in TextWrangler.

    You need to get all the URLs on separate lines. The easiest way to do this is to find and replace all commas with a comma followed by a hard return. Do this using the “grep” option, using “\r” to add the return. Your find and replace box will look like this (don’t forget to check the “grep” box at the bottom!):

    Find and replace. Don’t forget grep!

    Your manifest will now look something like this:

    Manifest, now with returns!

  3. Now we’re going to search this file to find everything that starts with “http” and ends with “jp2” (what I’m calling the base URLs). We’ll use the “grep” function again, and a little regular expression that will match everything between the beginning of the URL and the end( .* ). Your Find window should look like this (again, don’t forget to check “grep”). Click “Find All”:
    Find the URLs that end with “jp2”

    Your results will appear in a new window, and will look something like this:

    Search results.

  4. Now we want to export these results as text, and then remove anything in the file that isn’t a URL. First, go to TextWrangler’s File menu and select “Export as Text”:
    Export as Text.

    Save that text file wherever you’d like. Then open it in TextWrangler. You now need to do some finding and replacing, using “grep” (again!) and the .* regular expression to remove anything that is not http…jp2. I had to do two runs to get everything, first the stuff before the URLs, then the stuff after:

    Before first find and replace.

    After first find and replace.

    Before second find and replace.

    After second find and replace.

  5. You will notice (I hope!) that there are backslashes (\) before every forward slash (/) in the URLs – that’s how JSON escapes them. So we need to remove those too. Just do a regular find and replace, and DO NOT check the “grep” box:
    Before the slash find and replace.

    After the slash find and replace.

    Hooray! We have our list of base URLs. Now we need to add the criteria necessary to turn these base URLs into direct links to images.

    I keep mentioning the criteria required to turn these links from error-throwers into image files. If you go to the Vatican Digital Library website and mouse over the “Download” button for any image file, you’ll see what I mean. As you mouse over that button, a bar will appear at the very bottom of your window, and if you look carefully you’ll see that the URL there is the base URL (ending in “jp2”) followed by four things separated by slashes:

    Check out the bits after “jp2” in the URL.

    There is a detailed description of what exactly these mean in the IIIF Image API Documentation on the IIIF website, but basically:

    [baseurl]/region/size/rotation/quality

    So in this case, we have the full region (the entire image, not a piece of it), a size of 1047 pixels across by however tall that works out to be (since there is nothing after the comma), a rotation of 0 degrees, and a quality of native (aka default, I think – one could also use bitonal or gray to get images of those qualities). I like to get the “full” image size, so what I’m going to add to the end of the URLs is:

    [baseurl]/full/full/0/native.jpg

    We’ll just do this using another find and replace in TextWrangler.

  6. We’re just adding the additional criteria after the file extension, so all I do is find the file extension – jp2 – and replace all with “jp2/full/full/0/native.jpg”.
    Adding criteria: before find and replace.

    Adding criteria: After find and replace.

    Test one, just to make sure it works. Copy and paste the URL into a browser.

    Works for me.

  7. Now – finally! promise! – you can use Down them All to download all those lovely image files. In order to do that you need to turn the text links into hot links. When I was testing this I first tried opening the text file in Firefox and pointing Down Them All to it, but it broke Down Them All – and I mean BROKE it. I had to uninstall Down Them All and delete everything out of my Firefox profile before I could get it to work again. Happily I found a tool that made it easy to turn those text links into hot links: Multilinkr. So now open a new tab in Firefox and open Multilinkr. Copy all the URLs from TextWrangler and paste them into the Multilinkr box. Click the “Links” button and gasp as the text links turn into hot links:
    Text links.

    Hot links *gasp*

    Now go up to the Firefox “Tools” menu and select “Down Them All Tools > Down Them All” from the dropdown. Down Them All should automatically recognize all the files and highlight them. Two things to be careful about here. One is that you need to specify a download location. It will default to your Downloads folder, but I like to indicate a new folder using the shelfmark of the manuscript I’m downloading. You can also browse to download the files wherever you’d like. The second is that Down Them All will keep file names the same unless you tell it to do something different. In the case of the Vatican that’s not ideal, since all the files are named “native.jpg”, so if you don’t do something with the “Renaming Mask” you’ll end up with native.jpg, native.jpg(1), native.jpg(2), etc. I like to change the Renaming Mask from the default *name*.*ext* to *flatsubdirs*.*ext* – “flatsubdirs” stands for “flat subdirectories”, and it means the downloaded files will be named according to the path of subdirectories wherever they are being downloaded from. In the case of the Vatican files, a file that lives here:

    http://digi.vatlib.it/iiifimage/MSS_Vat.lat.3773/3396_0-AD0_f11fd975a99e2b099ee569f7667f8b8d0fd922dbc0cf5cd6730cda1a00626794_1469208118412_Vat.lat.3773_0003_pa_0002.jp2/full/full/0/native.jpg

    will be renamed

    iiifimage-MSS_Vat.lat.3773-3396_0-AD0_f11fd975a99e2b099ee569f7667f8b8d0fd922dbc0cf5cd6730cda1a00626794_1469208118412_Vat.lat.3773_0003_pa_0002.jp2-full-full-0.jpg

    This is still a mouthful, but both the shelfmark (Vat.lat.3773) and the page number or folio number are there (here it’s pa_0002.jp2 = page 2, in other manuscripts you’ll see for example fr_0003r.jp2), so it’s simple enough to use Automator or another tool to batch rename the files by removing all the other bits and just leaving the shelfmark and folio or page number.

There are other ways you could do this, too, using Excel to construct the URLs and wget to download, but I think the method outlined here is relatively simple for people who don’t have strong coding skills. Don’t hesitate to ask if you have trouble or questions about this! And please remember that the Vatican manuscript images are not licensed for reuse, so only download them for your own scholarly work.
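For readers who do have some coding skills, here is a rough sketch of the same pipeline in Python. Treat it as a starting point rather than a finished tool: it assumes the requests library is installed, that you’ve saved the downloaded manifest as manifest.json, and that the image URLs in the manifest still end in “.jp2” as described above.

import json, os, re
import requests  # assumption: third-party library, installed separately

# Load the manifest you downloaded from the Vatican Digital Library
with open("manifest.json") as f:
    manifest = json.load(f)

# Pull out every URL ending in ".jp2" – the same thing the grep searches above find.
# Re-serializing with json.dumps avoids the escaped slashes (\/) in the raw file.
base_urls = sorted(set(re.findall(r'https?://[^"]+?\.jp2', json.dumps(manifest))))

os.makedirs("images", exist_ok=True)
for url in base_urls:
    image_url = url + "/full/full/0/native.jpg"  # region/size/rotation/quality
    # Build a filename from the URL path, like the *flatsubdirs* renaming mask;
    # assumes the /iiifimage/ path seen in the example URL above
    name = url.split("/iiifimage/")[-1].replace("/", "-") + ".jpg"
    response = requests.get(image_url)
    response.raise_for_status()
    with open(os.path.join("images", name), "wb") as out:
        out.write(response.content)
    print("saved", name)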

How to download images using IIIF manifests, Part I: DownThemAll

IIIF manifests are great, but what if you want to work with digital images outside of a IIIF interface? There are a few different ways I’ve figured out that I can use IIIF manifests to download all the images from a manuscript. The exact approach will vary since different institutions construct their image URLs in different ways. Here’s the first approach, which is fairly straightforward and uses e-codices as an example. Tomorrow I’ll post a second post on the Vatican Digital Library. Please remember that most institutions license their images, so don’t repost or publish images unless the institution specifically allows this in their license.

Method 1: The manifest has urls that resolve directly to image files

This is the easiest method, but it only works if the manifest contains urls that resolve directly to image files. If you can copy a url and paste it into a browser and an image displays, you can use this method. The manifests provided by e-codices follow this approach.
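(If you’d rather script this step than use a browser plugin, here is a rough sketch in Python – a sketch only, assuming the requests library and that the manifest links straight to “default.jpg” files as described above; otherwise, skip ahead to the point-and-click steps below.)

import os, re
import requests  # assumption: third-party library, installed separately

manifest_url = "PASTE_THE_IIIF_MANIFEST_LINK_HERE"  # copied from the e-codices Overview page
manifest_text = requests.get(manifest_url).text

# Keep every URL that ends in "default.jpg" – the same files the DownThemAll filter selects
urls = list(dict.fromkeys(re.findall(r'https?://[^"\\]+?default\.jpg', manifest_text)))

os.makedirs("downloads", exist_ok=True)
for url in urls:
    # e-codices URLs contain the shelfmark and folio, so name each file after its URL path
    name = re.sub(r"^https?://", "", url).replace("/", "-")
    with open(os.path.join("downloads", name), "wb") as out:
        out.write(requests.get(url).content)
    print("saved", name)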

  1. Install DownThemAll, a Firefox browser plugin that allows you to download all the files linked to from a webpage. (There is a similar browser plugin for Chrome, called Get Them All, but it did not recognize the image files linked from the manifest)
  2. Go to e-codices, search for a manuscript, and click the “IIIF manifest” link on the Overview page.
    IIIF manifest link (look for the colorful IIIF logo)

    The manifest will open in the browser. It will look like a mess, but it doesn’t need to look good.

    Messy manifest.

  3. Open DownThemAll. It will recognize all the files linked from the manifest (including .json files, .jpg, .j2, and anything else) and list them. Click the box next to “JPEG Images” at the bottom of the page (under “Filters”). It will highlight all the JPEG images in the list, including the various “default.jpg” images and files ending with “.jp2”

    JPEG images highlighted in Down Them All

  4. Now, we only want the images that are named “default.jpg”. These are the “regular” jpeg files; the .jp2 files are the masters and, although you could download them, your browser wouldn’t know what to do with them. So we need to create a new filter so we get only the default.jpg files. To do this, first click “Preferences” in the lower right-hand corner, then click the “Filters” button in the resulting window.
    Filters.

    There they are. To create a new filter, click the “Add New Filter” button, and call the new filter “Default Jpg” (or whatever you like). In the Filtered Extensions field, type “/\/default.jpg” – the filter will select only those files that end with “default.jpg” (yes you do need three slashes there!). Note that you do not need to press save or anything, the filter list updates and saves automatically.

    New filter

  5. Return to the main Down Them All view and check the box next to your newly-created filter. Be amazed as all the “default.jpg” files are highlighted.

    AMAZE

  6. Don’t hit download just yet. If you do, it will download all the files with their given names, and since they are all named “default.jpg” it won’t end well. It will also download them all directly to whatever is specified under “Save Files in” (in my case, my Downloads folder) which also may not be ideal. So you need to change the Renaming Mask to at least give you unique names for each one, and specify where to download all those files. In the case of e-codices the manifest urls include both the manuscript shelfmark and the folio number for each image, so let’s use the Renaming Mask to name the files according to the file path. Simply change *name* to *flatsubdirs* (flat subdirectories). Under “Save Files in”, browse to wherever you want to download all these files.

    Renaming Mask and Save Files in, ready to go

  7. Press “Start” and wait for everything to download.
    Downloading…

    Congratulations, you have downloaded all the images from this manuscript! You’ll probably want to rename them (if you’re on Mac you can use Automator to do this fairly easily), and you should also save the manifest alongside the images.

    TOMORROW: THE VATICAN!

Manuscript PDFs: Update

My last post was an announcement that I’d posted the University of Pennsylvania’s Schoenberg Collection manuscripts on Google Drive as PDF files, along with details on how I did it. This is a follow-up to announce that I’ve since added PDF files for UPenn’s Medieval and Renaissance Manuscript collection, AND for the Walters Art Museum manuscripts (which are available for download through The Digital Walters).

As with the Schoenberg Manuscripts, these two other collections are in their own folders, along with a spreadsheet you can search and browse to aid in discovery. You are free to download the PDF files and redistribute them as you wish. They are in the public domain.

The main directory for the manuscripts is here.

Enjoy!

Source: Walters Ms. W.36, Touke Psalter, fol. 89r
Title: Initial “C” with St. Paul trampling Agrippa
Form: Historiated initial “C,” 12 lines
Text: Psalm 97
Comment: The inscriptions on the scrolls read “Paulus/Agr[ipp]a.”
The second inscription is partially obliterated.

 

UPenn’s Schoenberg Manuscripts, now in PDF

Hi everyone! It’s been almost a year since my last blog post (in which I promised to post more frequently, haha) so I guess it’s time for another one. I actually have something pretty interesting to report!

Last week I gave an invited talk at the Cultural Heritage at Scale symposium at Vanderbilt University. It was amazing. I spoke on OPenn: Primary Digital Resources Available to Everyone, which is the platform we use in the Schoenberg Institute for Manuscript Studies at the University of Pennsylvania Libraries to publish high-resolution digital images and accompanying metadata for all our medieval manuscripts (I also talked for a few minutes about the Schoenberg Database of Manuscripts, which is a provenance database of pre-1600 manuscripts). The philosophy of OPenn is centered on openness: all our manuscript images are in the public domain and our metadata is licensed with Creative Commons licenses, and none of those licenses prohibit commercial use. Next to openness, we embrace simplicity. There is no search facility or fancy interface to the data. The images and metadata files are on a file system (similar to the file systems on your own computer) and browse pages for each manuscript are presented in HTML that is processed directly from the metadata file. (Metadata files are in TEI/XML using the manuscript description element)

This approach is actually pretty novel. Librarians and faculty scholars alike love their interfaces! And, indeed, after my talk someone came up to me and said, “I’m a humanities faculty member, and I don’t want to have to download files. I just want to see the manuscripts. So why don’t you make them available as PDF so I can use them like that?”

This gave me the opportunity to talk about what OPenn is, and what it isn’t (something I didn’t have time to do in my talk). The humanities scholar who just wants to look at manuscripts is really not the audience for OPenn. If you want to search for and page through manuscripts, you can do that on Penn in Hand, our longstanding page-turning interface. OPenn is about data, and it’s about access. It isn’t for people who want to look at manuscripts, it’s for people who want to build things with manuscript data. So it wouldn’t make sense for us to have PDFs on OPenn – that’s just not what it’s for.

Landing page for Penn in Hand.

HOWEVER. However. I’m sympathetic. Many, many people want to look at manuscripts, and PDFs are convenient, and I want to encourage them to see our manuscripts as available to them! So, even if Penn isn’t going to make PDFs available institutionally (at least, not yet – we may in the future), maybe this is something I could do myself. And since all our manuscript data is available on OPenn and licensed for reuse, there is no reason for me not to do it.

So here they are.

If you click that link, you’ll find yourself in a Google Drive folder titled “OPenn manuscript PDFs”. In there is currently one folder, “LJS Manuscripts.” In that folder you’ll find a link to a Google spreadsheet and over 400 PDF files. The spreadsheet lists all the LJS manuscripts (LJS = Laurence J. Schoenberg, who gifted his manuscripts to Penn in 2012) including catalog descriptions, origin dates, origin locations, and shelfmarks. Let’s say you’re interested in manuscripts from France. You can highlight the Origin column and do a “Find” for “France.” It’s not a fancy search so you’ll have to write down the shelfmarks of the manuscripts as you find them, but it works. Once you know the shelfmarks, go back into the “LJS Manuscripts” folder and find and download the PDF files you want. Note that some manuscripts may have two PDF files, one with “_extra” in the file name. These are images that are included on OPenn but not part of the front-to-back digitization of a manuscript. They might include things like extra shots of the binding, or reference shots.

If you are interested in knowing how I did this, please read on. If not, enjoy the PDFs!

How I did it

I’ll be honest, this is my favorite part of the exercise so thank you for sticking with me for it! There won’t be a pop quiz at the end although if you want to try this out yourself you are most welcome to.

First I downloaded all the web jpeg files from the LJS collection on OPenn. I used wget to do this, because with wget I am able to get only the web jpeg files from all the collection folders at once. My wget command looked like this:

wget -r -np -A "_web.jpg" http://openn.library.upenn.edu/Data/0001/

Brief translation:

wget = use the wget program
-r = “recursive”, basically means go into all the child folders, not just the folder I’m pointing to
-np = “no parent”, basically means don’t go into the parent folders, no matter what
-A “_web.jpg” = “accept list”, in this case I specified that I only want those files that contain _web.jpg (which all the web jpeg files on OPenn do)
http://openn.library.upenn.edu/Data/0001/ = where all the LJS manuscript data lives

I didn’t use the -nd command, which I usually do. (-nd = “no directory”; if you don’t use this command you get the entire file structure of the file server starting from root, which in this case is openn.library.upenn.edu. What this means, practically, is that if you use wget to download one file from a directory five levels deep, you get four levels of empty folders and then the final directory with one file in it. Not fun. But in this case the structure is helpful, and you’ll see why later.)

At my house, with a pretty good wireless connection, it took about 5 hours to download everything.

I used Automator to batch create the PDF files. After a bit of googling I found this post on batch creating multipage PDF files from jpeg files. There are some different suggestions, but I opted to use Mac’s Automator. There is a workflow linked from that post. I downloaded that and (because all of the folders of jpeg images I was going to process are in different parent folders) I replaced the first step in the workflow, which was Get Selected Finder Items, with Get Specified Finder Items. This allowed me to search in Automator for exactly what I wanted. So I added all the folders called “web” that were located in the ancestor folder “openn.library.upenn.edu” (which was created when I downloaded all the images from OPenn in the previous step). In this step Automator creates one PDF file named “output.pdf” for each manuscript in the same location as that manuscript’s web jpeg images (in a folder called web – which is important to know later).
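(If you’re not on a Mac, or prefer a script, the same step could be done in Python with the Pillow imaging library. This is just a sketch of the idea, not the workflow I actually used, and it assumes the directory layout described above.)

import glob, os
from PIL import Image  # assumption: the Pillow library, installed separately

# One "web" folder of jpegs per manuscript, as downloaded by wget above
for web_dir in glob.glob("openn.library.upenn.edu/Data/0001/*/data/web"):
    jpegs = sorted(glob.glob(os.path.join(web_dir, "*_web.jpg")))
    if not jpegs:
        continue
    pages = [Image.open(j).convert("RGB") for j in jpegs]  # note: loads every page into memory
    # Save one multipage PDF per manuscript, next to its images, just like the Automator workflow
    pages[0].save(os.path.join(web_dir, "output.pdf"), save_all=True, append_images=pages[1:])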

Once I created the PDFs, I no longer needed the web jpeg files. So I took some time to delete all the web jpegs. I did this by searching in Finder for “_web.jpg” in openn.library.upenn.edu and then sending them all to Trash. This took ages, but when it was done the only things left in those folders were the output.pdf files.

I still had more work to do. I needed to change the names of the PDF files so I would know which manuscripts they represented. Again, after a bit of Googling, I chanced upon this post which includes an AppleScript that did exactly what I needed: it renames files according to the path of their location on the file system. For example, the file “output.pdf” located in Macintosh HD/Users/dorp/Downloads/openn/openn.library.upenn.edu/Data/0001/ljs101/data/web would be renamed “Macintosh HD_Users_dorp_Downloads_openn_openn.library.upenn.edu_Data_0001_ljs101_data_web_001.pdf”. I’d never used AppleScript before so I had to figure that out, but once I did it was smooth sailing – just took a while. (To run the script I copied it into Apple’s Script Editor, hit the play button, and selected openn.library.upenn.edu/Data/0001 when it asked me where I wanted to point the script)

Finally, I had to remove all the extraneous pieces of the file names to leave just the shelfmark (or shelfmark + “extra” for those files that represent the extra images). Automator to the rescue again!

  1. Get Specified Finder Items (adding all PDF files located in the ancestor folder “openn.library.upenn.edu”)
  2. Rename Finder Items to replace text (replacing “Macintosh HD_Users_dorp_Downloads_openn_openn.library.upenn.edu_Data_0001_” with nothing)
  3. Rename Finder Items to replace text (replacing “_data_web_001” with nothing)
  4. Rename Finder Items to replace text (replacing “_data_extra_web_001” with “_extra” – this identifies PDFs that are for “extra” images)
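(The same renaming could be scripted in Python; again, this is a sketch rather than what I actually ran, and the long prefix will differ depending on where the files live on your machine.)

import glob, os

prefix = "Macintosh HD_Users_dorp_Downloads_openn_openn.library.upenn.edu_Data_0001_"
for path in glob.glob("openn.library.upenn.edu/**/*.pdf", recursive=True):
    name = os.path.basename(path)
    name = name.replace(prefix, "")
    name = name.replace("_data_web_001", "")              # ..._ljs101_data_web_001.pdf -> ljs101.pdf
    name = name.replace("_data_extra_web_001", "_extra")  # extra images get an "_extra" suffix
    os.rename(path, os.path.join(os.path.dirname(path), name))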

The last thing I had to do was to move them into Google Drive. Again, I just searched for “.pdf” in Finder (just taking those that are in openn.library.upenn.edu/Data/0001) and dragged them into Google Drive.

All done!

I generated the spreadsheet by running an XSLT script over the TEI manuscript descriptions (it’s a spreadsheet I created a couple of years ago when I first uploaded data about the Penn manuscripts to Viewshare). Leave a comment or send me a note if that sounds interesting and I’ll make a post on it.

 

It’s been a while since I rapped at ya

I’m not dead! I’m just really bad when it comes to blogging. I’m better at Facebook, and somewhat better at Twitter (and Twitter), and I do my best to update Tumblr.

The stated purpose of this blog is to give technical details of my work. This mostly involves finding data, and moving it around from one format to another. I use XSLT, because it’s what I know, although I’ve seen the promise of Python and I may eventually take the time to learn it well. I don’t know when that will happen, though.

I’ve taken to posting files and documentation on GitHub, so if you’re curious you can look there. If you’re familiar with my interests, and you share them, the most interesting things will be VisColl, a developing system for generating visualizations of manuscript codices showing elements of physical construction; DistributionVis, which is, as described on GitHub, “a wee script to visualize the distribution of illustration in manuscripts from the Walters Art Museum”; and ebooks, files I use to start the process of building ebooks from our digitized collection. (Finished ebooks are archived in UPenn’s institutional repository; you can download them there.)

VisColl – quire with an added leaf

DistributionVis – Different color lines refer to different types of illustrations or texts.

VisColl has legs, a lot of interest in the community, and is part of a major grant from the Mellon Foundation to the University of Toronto. Woohoo! DistributionVis is something I threw together in an afternoon because I wanted to see if I could. I thought ebooks were a nice way to provide a different way for people to access our collection. I’ve no idea if either of these two are any use to anyone, but I put them out there, because why not?

I do a lot of putting-things-out-there-because-why-not – it seems to be what I’m best at. So I’m going to continue doing that. And when I do, I shall try my very best to put it here too!

Until next time…

Disbinding Some Manuscripts, and Rebinding Some Others (presented at ICMS, Kalamazoo, MI, May 2014)

I presented my collaborative project on visualizing collation at the International Congress on Medieval Studies in Kalamazoo, Michigan, last week, and it was really well received. Also last week I discovered the Screen Recording function in QuickTime on my Mac. So, I thought it might be interesting to re-present the Kalamazoo talk in my office and record it so people who weren’t able to make the talk could still see what we are up to. I think this is longer than the original presentation – 23 minutes! – so feel free to skip around if it gets boring. Also there is no editing, so um ah um sorry about that. (Watch out for a noise at 18:16, I think my hand brushed the microphone, it’s unnerving if you’re not expecting it)

We’ll also be presenting this work as a poster/demo at the Digital Humanities 2014 Conference in Lausanne this July.

How to get MODS using the NYPL Digital Collections API

Last week I figured out how to batch-download MODS records from the NYPL Digital Collections API (http://api.repo.nypl.org/) using my limited set of technical skills, so I thought I would share my process here.

I had a few tools at my disposal. First, I’m on a Macbook. I’m not sure how I would have done this had I been on a Windows machine. Second, I’m pretty good with XSLT. Although I have some experience with some other languages (javascript, python, perl) I’m not really good at them. It’s possible one could do something like this using other languages and it would be more effective – but I use what I know. I also had a browser, which came in handy in the first step.

The first thing I had to do was find all the objects that I wanted to get the MODS for. I wanted all the medieval objects (surprise!), so to get as broad a search as possible I opted for the “Search across all MODS fields” option (Method 4 in the API Documentation), which involves constructing a URL to stick in a browser. Because the most results the API will return on a single search is 500, I included that limit in my search. I ended up constructing four URLs, since it turned out there were between 1500 and 2000 objects.

I plugged these into my browser, then just saved these result pages as XML files in a directory on my Mac. Each of these results pages had a brief set of fields for each object: UUID (the unique identifier for the objects, and the thing I needed to use to get the MODS), title, typeOfResource, imageID, and itemLink (the URL for the object in the NYPL Digital Collections website).

Next, I had to figure out how to feed the UUIDs back into the API. I thought about this for most of a day, and an evening, and then a morning. I tapped my network for some suggestions, and it wasn’t until Conal Tuohy suggested using document() in XSLT that I thought XSLT might actually work.

To get the MODS record for any UUID, you need to simply construct a URL that points to the MODS record on the NYPL file directory. They look like this:

http://api.repo.nypl.org/api/v1/items/mods/[UUID].xml

For my first attempt, I wrote an XSLT document that used document(), constructing pointers to each MODS record when processed over the result documents I saved from my browser. Had this worked, it would have pulled all the MODS records into a new document during processing. I use Oxygen for most all of my XML work, including processing, but when I tried to process my first result document I got an I/O error. Of course I did – the API doesn’t allow just any old person in. You need to authenticate, and when you sign up with the API they send you an authentication token. There may be some way to authenticate through Oxygen, but if so I couldn’t figure it out. So, back to the drawing board.

Over lunch on the second day, I picked the brain of my colleague Doug Emery. Doug and I worked together on the Walters BookReaders (which are elsewhere on this site), and I trust him to give good advice. We didn’t have much time, but he suggested using a curl request through the terminal on my Mac – maybe I could do something like that? I had seen curl mentioned on the API documentation as well, but I hadn’t heard of it and certainly hadn’t used it before. But I went back to my office and did some research.

Basically, curl is a command-line tool for grabbing the content of whatever is at the other end of a URL. You give it a URL, and it sends back whatever is on the other end. So, if you send out the URL for an NYPL MODS record, it will send the MODS record back. There’s an example on the NYPL API documentation page which incorporates the authentication token. Score!

curl "http://api.repo.nypl.org/api/v1/items?identifier_type=local_bnumber&identifier_val=b11722689" -H 'Authorization: Token token="abcdefghijklmn"', where 'abcdefghijklmn' is the authentication token you receive when you sign up (link coming soon).

Next, I needed to figure out how to send between 1500 and 2000 URLs through my terminal, without having to do them one by one. Through a bit of Google searching I discovered that it’s possible to replace the URL in the command with a pointer to a text file containing a list of URLs, in the format url = [url]. So I wrote a short XSLT that I used to process over all four of the result documents, pulling out the UUIDs, constructing URLs that pointed to the corresponding MODS records, and putting them in the correct format. Then I put pointers to those documents in my curl command:

curl -K "nypl_medieval_4_forCurl.txt" -H 'Authorization: Token token="[my_token]"' > test.xml

Voila – four documents chock full of MODS goodness. And I was able to do it with a Mac terminal and just a little bit of XSLT.
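(For what it’s worth, the same loop would not be hard in Python either; here is a rough sketch using the requests library, with the same placeholder token as above. I stuck with XSLT and curl because they’re what I know.)

import requests  # assumption: third-party library, installed separately

token = "abcdefghijklmn"  # the authentication token you receive when you sign up
headers = {"Authorization": 'Token token="%s"' % token}
uuids = []  # fill this with the UUIDs pulled from the four search-result documents

for uuid in uuids:
    url = "http://api.repo.nypl.org/api/v1/items/mods/%s.xml" % uuid
    response = requests.get(url, headers=headers)
    response.raise_for_status()
    with open(uuid + ".xml", "w") as out:
        out.write(response.text)  # one MODS record per file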

Q: How do you teach TEI in an hour?

A: You don’t! But you can provide a substantial introduction to the concept of the TEI, and explain how it functions.

On June 4 I participated in PhillyDH@Penn, a day of workshops and unconference sessions sponsored by PhillyDH and held in my own beloved Van Pelt-Dietrich Library on the University of Pennsylvania campus. I was sick, so I wasn’t able to participate fully, but I was able to lead a one-hour Introduction to TEI. I aimed it at absolute beginners, with the intention to a) Give the audience an idea of what TEI is and what it’s for (to help them answer the question, Is TEI really what I need?) and b) explain enough about the TEI so they will know a bit of something walking into their first “real” (multi-hour, hands-on) TEI workshop. I got a lot of good feedback, so hopefully it did its job. And I do hope to have the opportunity to follow this up with more substantial workshops.

Slides (in PDF format) are posted here.

EDIT: Need to add that these slides owe a ton to James Cummings, with whom I have taught TEI and to whom I owe much of what I know about it!