UPenn’s Schoenberg Manuscripts, now in PDF

Hi everyone! It’s been almost a year since my last blog post (in which I promised to post more frequently, haha) so I guess it’s time for another one. I actually have something pretty interesting to report!

Last week I gave an invited talk at the Cultural Heritage at Scale symposium at Vanderbilt University. It was amazing. I spoke on OPenn: Primary Digital Resources Available to Everyone, which is the platform we use in the Schoenberg Institute for Manuscript Studies at the University of Pennsylvania Libraries to publish high-resolution digital images and accompanying metadata for all our medieval manuscripts (I also talked for a few minutes about the Schoenberg Database of Manuscripts, which is a provenance database of pre-1600 manuscripts). The philosophy of OPenn is centered on openness: all our manuscript images are in the public domain and our metadata is licensed with Creative Commons licenses, and none of those licenses prohibit commercial use. Next to openness, we embrace simplicity. There is no search facility or fancy interface to the data. The images and metadata files are on a file system (similar to the file systems on your own computer) and browse pages for each manuscript are presented in HTML that is processed directly from the metadata file. (Metadata files are in TEI/XML, using the manuscript description element.)


This approach is actually pretty novel. Librarians and faculty scholars alike love their interfaces! And, indeed, after my talk someone came up to me and said, “I’m a humanities faculty member, and I don’t want to have to download files. I just want to see the manuscripts. So why don’t you make them available as PDF so I can use them like that?”

This gave me the opportunity to talk about what OPenn is, and what it isn’t (something I didn’t have time to do in my talk). The humanities scholar who just wants to look at manuscripts is really not the audience for OPenn. If you want to search for and page through manuscripts, you can do that on Penn in Hand, our longstanding page-turning interface. OPenn is about data, and it’s about access. It isn’t for people who want to look at manuscripts, it’s for people who want to build things with manuscript data. So it wouldn’t make sense for us to have PDFs on OPenn – that’s just not what it’s for.

Landing page for Penn in Hand.

HOWEVER. However. I’m sympathetic. Many, many people want to look at manuscripts, and PDFs are convenient, and I want to encourage them to see our manuscripts as available to them! So, even if Penn isn’t going to make PDFs available institutionally (at least, not yet – we may in the future), maybe this is something I could do myself. And since all our manuscript data is available on OPenn and licensed for reuse, there is no reason for me not to do it.

So here they are.

If you click that link, you’ll find yourself in a Google Drive folder titled “OPenn manuscript PDFs”. In there is currently one folder, “LJS Manuscripts.” In that folder you’ll find a link to a Google spreadsheet and over 400 PDF files. The spreadsheet lists all the LJS manuscripts (LJS = Laurence J. Schoenberg, who gifted his manuscripts to Penn in 2012) including catalog descriptions, origin dates, origin locations, and shelfmarks. Let’s say you’re interested in manuscripts from France. You can highlight the Origin column and do a “Find” for “France.” It’s not a fancy search so you’ll have to write down the shelfmarks of the manuscripts as you find them, but it works. Once you know the shelfmarks, go back into the “LJS Manuscripts” folder and find and download the PDF files you want. Note that some manuscripts may have two PDF files, one with “_extra” in the file name. These are images that are included on OPenn but not part of the front-to-back digitization of a manuscript. They might include things like extra shots of the binding, or reference shots.

If you are interested in knowing how I did this, please read on. If not, enjoy the PDFs!


How I did it

I’ll be honest, this is my favorite part of the exercise so thank you for sticking with me for it! There won’t be a pop quiz at the end although if you want to try this out yourself you are most welcome to.

First I downloaded all the web jpeg files from the LJS collection on OPenn. I used wget to do this, because with wget I am able to get only the web jpeg files from all the collection folders at once. My wget command looked like this:

wget -r -np -A "_web.jpg" http://openn.library.upenn.edu/Data/0001/

Brief translation:

wget = use the wget program
-r = “recursive”, basically means go into all the child folders, not just the folder I’m pointing to
-np = “no parent”, basically means don’t go into the parent folders, no matter what
-A “_web.jpg” = “accept list”, in this case I specified that I only want those files that contain _web.jpg (which all the web jpeg files on OPenn do)
http://openn.library.upenn.edu/Data/0001/ = where all the LJS manuscript data lives

I didn’t use the -nd option, which I usually do (-nd = “no directory”; if you don’t use it you get the entire file structure of the file server starting from root, which in this case is openn.library.upenn.edu). What this means, practically, is that if you use wget to download one file from a directory five levels deep, you get four levels of otherwise empty folders and then the final directory with one file in it. Not fun. But in this case the full structure is helpful, and you’ll see why later.

At my house, with a pretty good wireless connection, it took about 5 hours to download everything.
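When it was done, I had a local mirror of the OPenn folder structure for the collection, roughly like this (using ljs101 as the example manuscript and leaving out the image files themselves):

openn.library.upenn.edu/
  Data/
    0001/
      ljs101/
        data/
          web/   (the web jpegs for this manuscript)
      (one folder like this per manuscript)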

I used Automator to batch create the PDF files. After a bit of googling I found this post on batch creating multipage PDF files from jpeg files. There are some different suggestions, but I opted to use Mac’s Automator. There is a workflow linked from that post. I downloaded that and (because all of the folders of jpeg images I was going to process are in different parent folders) I replaced the first step in the workflow, which was Get Selected Finder Items, with Get Specified Finder Items. This allowed me to search in Automator for exactly what I wanted. So I added all the folders called “web” that were located in the ancestor folder “openn.library.upenn.edu” (which was created when I downloaded all the images from OPenn in the previous step). In this step Automator creates one PDF file named “output.pdf” for each manuscript in the same location as that manuscript’s web jpeg images (in a folder called web – which is important to know later).
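If you’d rather not use Automator at all, a rough command-line equivalent is to loop over the web folders and combine each one’s jpegs into a PDF with ImageMagick. This is just a sketch, assuming ImageMagick is installed and that the download kept the folder layout described above:

find openn.library.upenn.edu/Data/0001 -type d -name web | while read dir; do
  # combine every web jpeg in this folder into one multi-page PDF named output.pdf
  convert "$dir"/*_web.jpg "$dir/output.pdf"
done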

Once I created the PDFs, I no longer needed the web jpeg files. So I took some time to delete all the web jpegs. I did this by searching in Finder for “_web.jpg” in openn.library.upenn.edu and then sending them all to Trash. This took ages, but when it was done the only things left in those folders were the output.pdf files.
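If you’re comfortable in the terminal, the same cleanup is a one-liner (run from the folder that contains openn.library.upenn.edu):

find openn.library.upenn.edu -name "*_web.jpg" -delete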

I still had more work to do. I needed to change the names of the PDF files so I would know which manuscripts they represented. Again, after a bit of Googling, I chanced upon this post, which includes an AppleScript that did exactly what I needed: it renames files according to the path of their location on the file system. For example, the file “output.pdf” located in Macintosh HD/Users/dorp/Downloads/openn/openn.library.upenn.edu/Data/0001/ljs101/data/web would be renamed “Macintosh HD_Users_dorp_Downloads_openn_openn.library.upenn.edu_Data_0001_ljs101_data_web_001.pdf”. I’d never used AppleScript before so I had to figure that out, but once I did it was smooth sailing – just took a while. (To run the script I copied it into Apple’s Script Editor, hit the play button, and selected openn.library.upenn.edu/Data/0001 when it asked me where I wanted to point the script.)

Finally, I had to remove all the extraneous pieces of the file names to leave just the shelfmark (or shelfmark + “extra” for those files that represent the extra images). Automator to the rescue again!

  1. Get Specified Finder Items (adding all PDF files located in the ancestor folder “openn.library.upenn.edu”)
  2. Rename Finder Items to replace text (replacing “Macintosh HD_Users_dorp_Downloads_openn_openn.library.upenn.edu_Data_0001_” with nothing)
  3. Rename Finder Items to replace text (replacing “_data_web_001” with nothing)
  4. Rename Finder Items to replace text (replacing “_data_extra_web_001” with “_extra” – this identifies PDFs that are for “extra” images)
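For the command-line inclined, the whole renaming stage (the AppleScript plus the three find-and-replace steps) could probably be collapsed into a single shell loop. This is an untested sketch, run from the folder that contains openn.library.upenn.edu:

find openn.library.upenn.edu/Data/0001 -name output.pdf | while read f; do
  # the fourth path segment is the shelfmark, e.g. ljs101
  shelfmark=$(echo "$f" | cut -d/ -f4)
  # PDFs built from the "extra" images get an _extra suffix
  case "$f" in *extra*) suffix="_extra" ;; *) suffix="" ;; esac
  mv "$f" "$(dirname "$f")/${shelfmark}${suffix}.pdf"
done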

The last thing I had to do was move them into Google Drive. Again, I just searched for “.pdf” in Finder (taking just those in openn.library.upenn.edu/Data/0001) and dragged them into the Google Drive folder.

All done!

I generated the spreadsheet by running an XSLT script over the TEI manuscript descriptions (it’s a spreadsheet I created a couple of years ago, when I first uploaded data about the Penn manuscripts to Viewshare). Leave a comment or send me a note if that sounds interesting and I’ll make a post on it.

 

It’s been a while since I rapped at ya

I’m not dead! I’m just really bad when it comes to blogging. I’m better at Facebook, and somewhat better at Twitter (and Twitter), and I do my best to update Tumblr.

The stated purpose of this blog is to give technical details of my work. This mostly involves finding data, and moving it around from one format to another. I use XSLT, because it’s what I know, although I’ve seen the promise of Python and I may eventually take the time to learn it well. I don’t know when that will happen, though.

I’ve taken to posting files and documentation on Github, so if you’re curious you can look there. If you’re familiar with my interests, and you share them, the most interesting things will be VisColl, a developing system for generating visualizations of manuscript codices showing elements of physical construction, DistributionVis, which is, as described on Github, “a wee script to visualize the distribution of illustration in manuscripts from the Walters Art Museum,” and ebooks, files I use to start the process of building ebooks from our digitized collection. (Finished ebooks are archived in UPenn’s institutional repository; you can download them there.)

VisColl – quire with an added leaf
DistributionVis – Different color lines refer to different types of illustrations or texts.

VisColl has legs, a lot of interest in the community, and is part of a major grant from the Mellon Foundation to the University of Toronto. Woohoo! DistributionVis is something I threw together in an afternoon because I wanted to see if I could. I thought ebooks were a nice way to provide a different way for people to access our collection. I’ve no idea if either of these two are any use to anyone, but I put them out there, because why not?

I do a lot of putting-things-out-there-because-why-not – it seems to be what I’m best at. So I’m going to continue doing that. And when I do, I shall try my very best to put it here too!

Until next time…

Disbinding Some Manuscripts, and Rebinding Some Others (presented at ICMS, Kalamazoo, MI, May 2014)

I presented my collaborative project on visualizing collation at the International Congress on Medieval Studies in Kalamazoo, Michigan, last week, and it was really well received. Also last week I discovered the Screen Recording function in QuickTime on my Mac. So, I thought it might be interesting to re-present the Kalamazoo talk in my office and record it so people who weren’t able to make the talk could still see what we are up to. I think this is longer than the original presentation – 23 minutes! – so feel free to skip around if it gets boring. Also there is no editing, so um ah um sorry about that. (Watch out for a noise at 18:16, I think my hand brushed the microphone, it’s unnerving if you’re not expecting it)

We’ll also be presenting this work as a poster/demo at the Digital Humanities 2014 Conference in Lausanne this July.

How to get MODS using the NYPL Digital Collections API

Last week I figured out how to batch-download MODS records from the NYPL Digital Collections API (http://api.repo.nypl.org/) using my limited set of technical skills, so I thought I would share my process here.

I had a few tools at my disposal. First, I’m on a Macbook. I’m not sure how I would have done this had I been on a Windows machine. Second, I’m pretty good with XSLT. Although I have some experience with some other languages (javascript, python, perl) I’m not really good at them. It’s possible one could do something like this using other languages and it would be more effective – but I use what I know. I also had a browser, which came in handy in the first step.

The first thing I had to do was find all the objects that I wanted to get the MODS for. I wanted all the medieval objects (surprise!), so to get as broad a search as possible I opted for the “Search across all MODS fields” option (Method 4 in the API Documentation), which involves constructing a URL to stick in a browser. Because the most results the API will return on a single search is 500, I included that limit in my search. I ended up constructing four URLs, since it turned out there were between 1500 and 2000 objects:
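The four URLs aren’t reproduced here, but each followed the API’s keyword-search pattern, something like the line below (the search term and paging parameters here are just for illustration, not the exact queries):

http://api.repo.nypl.org/api/v1/items/search?q=medieval&per_page=500&page=1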

I plugged these into my browser, then just saved these result pages as XML files in a directory on my Mac. Each of these results pages had a brief set of fields for each object: UUID (the unique identifier for the objects, and the thing I needed to use to get the MODS), title, typeOfResource, imageID, and itemLink (the URL for the object in the NYPL Digital Collections website).

Next, I had to figure out how to feed the UUIDs back into the API. I thought about this for most of a day, and an evening, and then a morning. I tapped my network for some suggestions, and it wasn’t until Conal Tuohy suggested using document() in XSLT that I thought XSLT might actually work.

To get the MODS record for any UUID, you need to simply construct a URL that points to the MODS record on the NYPL file directory. They look like this:

http://api.repo.nypl.org/api/v1/items/mods/[UUID].xml

For my first attempt, I wrote an XSLT document that used document(), constructing pointers to each MODS record when processed over the result documents I saved from my browser. Had this worked, it would have pulled all the MODS records into a new document during processing. I use Oxygen for most all of my XML work, including processing, but when I tried to process my first result document I got an I/O error. Of course I did – the API doesn’t allow just any old person in. You need to authenticate, and when you sign up with the API they send you an authentication token. There may be some way to authenticate through Oxygen, but if so I couldn’t figure it out. So, back to the drawing board.

Over lunch on the second day, I picked the brain of my colleague Doug Emery. Doug and I worked together on the Walters BookReaders (which are elsewhere on this site), and I trust him to give good advice. We didn’t have much time, but he suggested using a curl request through the terminal on my Mac – maybe I could do something like that? I had seen curl mentioned on the API documentation as well, but I hadn’t heard of it and certainly hadn’t used it before. But I went back to my office and did some research.

Basically, curl is a command-line tool for grabbing the content of whatever is at the other end of a URL. You give it a URL, and it sends back whatever is on the other end. So, if you send out the URL for an NYPL MODS record, it will send the MODS record back. There’s an example on the NYPL API documentation page which incorporates the authentication token. Score!

curl "http://api.repo.nypl.org/api/v1/items?identifier_type=local_bnumber&identifier_val=b11722689" -H 'Authorization: Token token="abcdefghijklmn"' where "abcdefghijklmn" is the authentication token you receive when you sign up.
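Putting that together with the MODS URL pattern above, fetching the record for a single UUID looks something like this (the [UUID] is a placeholder):

curl "http://api.repo.nypl.org/api/v1/items/mods/[UUID].xml" -H 'Authorization: Token token="abcdefghijklmn"'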

Next, I needed to figure out how to send between 1500 and 2000 URLs through my terminal, without having to do them one by one. Through a bit of Google searching I discovered that it’s possible to replace the URL in the command with a pointer to a text file containing a list of URLs, in the format url = [url]. So I wrote a short XSLT that I used to process over all four of the result documents, pulling out the UUIDs, constructing URLs that pointed to the corresponding MODS records, and putting them in the correct format. Then I put pointers to those documents in my curl command:

curl -K "nypl_medieval_4_forCurl.txt" -H 'Authorization: Token token="[my_token]"' > test.xml
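For reference, the file handed to -K is nothing fancy: just a list of url = lines, one per record (the UUIDs below are placeholders):

url = "http://api.repo.nypl.org/api/v1/items/mods/[UUID-1].xml"
url = "http://api.repo.nypl.org/api/v1/items/mods/[UUID-2].xml"
url = "http://api.repo.nypl.org/api/v1/items/mods/[UUID-3].xml"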

Voila – four documents chock full of MODS goodness. And I was able to do it with a Mac terminal and just a little bit of XSLT.

Q: How do you teach TEI in an hour?

A: You don’t! But you can provide a substantial introduction to the concept of the TEI, and explain how it functions.

On June 4 I participated in PhillyDH@Penn, a day of workshops and unconference sessions sponsored by PhillyDH and held in my own beloved Van Pelt-Dietrich Library on the University of Pennsylvania campus. I was sick, so I wasn’t able to participate fully, but I was able to lead a one-hour Introduction to TEI. I aimed it at absolute beginners, with the intention to a) Give the audience an idea of what TEI is and what it’s for (to help them answer the question, Is TEI really what I need?) and b) explain enough about the TEI so they will know a bit of something walking into their first “real” (multi-hour, hands-on) TEI workshop. I got a lot of good feedback, so hopefully it did its job. And I do hope to have the opportunity to follow this up with more substantial workshops.

Slides (in PDF format) are posted here.

EDIT: Need to add that these slides owe a ton to James Cummings, with whom I have taught TEI and to whom I owe much of what I know about it!

New Job

I have a new job! As of April 1, I am the Curator, Digital Research Services, Schoenberg Institute for Manuscript Studies, Special Collections Center, University of Pennsylvania.

Now say that five times fast.

In this role, I’m responsible for the digital initiatives coming out of SIMS. I’m also a curator for the Schoenberg Manuscript collection (as are all SIMS staffers in the SCC), and I manage the Vitale Special Collections Digital Media Lab (Vitale 2). It’s also clear that I will have some broad role in the digital humanities at UPenn, although that part of the job isn’t so clear to me yet. I’m the inaugural DRS Curator, so there is still a lot to be worked out although after three weeks on the job, I’m feeling really confident that things are under control.

I report to Will Noel, Director of the SCC and Director of SIMS. Will may be best known as the Project Director for the Archimedes Palimpsest Project at the Walters Art Museum. Will is awesome, and I’m thrilled to be working for him. I’m also thrilled to be working at the University of Pennsylvania, in a beautiful brand new space, in the great city of Philadelphia.

I plan to blog more in this new position, so keep your eyes here and on the MESA blog.

Medieval Electronic Scholarly Alliance

It’s not that I haven’t been doing anything; I just haven’t been doing anything here.

In June, Tim Stinson at NCSU and I were awarded a grant from the Andrew W. Mellon Foundation to implement the Medieval Electronic Scholarly Alliance, MESA, and since we started preparing metadata for indexing in, oh, about mid-September, all of my technical energy has been going into that work. MESA is a node of the Advanced Research Consortium (ARC), and follows the model of scholarly federation spearheaded by NINES and followed by 18thconnect, using Collex as its underlying software support.

What does it involve? It involves taking metadata already created by digital collections or projects (like, for example, the Walters Art Museum, Parker on the Web, or e-Codices), mapping their fields out to the Collex and MESA fields, and then generating RDF that follows the Collex guidelines. It’s these three collections (and, just in the last week and with a ton of help from Lisa McAulay at UCLA, the St. Gall Project) that I’ve spent the most time on, and here are some key things I’ve learned so far that might be helpful for people considering adding their collections to MESA (or to any of the other ARC nodes)… or, for that matter, for anyone who is interested in sharing their data, any time ever.

  1. If your metadata is in an XML format, put unique identifiers on all the tags. Just do it.
  2. Follow best practices for file and folder naming. In this case, best practice probably just means BE CONSISTENT. My favorite example for illustrating consistency is probably the Digital Walters at the Walters Art Museum. One of the reasons that it’s a great example is that their data (image files and metadata) has been released under Open Access licensing, with the intention that people (like me) will come along and grab them, and work with them. What this means is that everything was designed with this usage in mind, so it’s simple, and that all their practices are well-documented, so it’s easy to take them and run without having to figure a lot out. Their file and folder naming practices are documented here.
  3. Be thoughtful about how you format your dates. Yeah, dates are a big deal. The metadata of one of the projects (I won’t say which one) had dates formatted as, e.g., “xii early-xiii late (?)”, with no ISO (or any other even slightly computer-readable) version in, e.g., an attribute value. I ended up mapping every individual date value to a computer-readable form. As always, I figure there was a better way to do it, but at the time… well, it took a while, but in the end it did what I needed it to do. If you are in the process of setting up a new project, consider including computer-readable dates in your metadata. Because Collex/MESA includes values for both date label and date value, you can include both a computer-readable date to be used for searching, and a human-readable version, which may have more nuance than can be included in a computer-readable value. For example, LABEL: xii early-xiii late (?); VALUE: 800,900.
  4. Is whatever you are describing in English? Great! If it’s not, find someplace in your metadata to indicate what language it is in. Use a controlled vocabulary, such as the ISO 639-2 Language Code List, and as with dates it helps to include a computer-readable code in addition to a human-readable version. (This is less important for MESA, as we can map between the columns in the ISO 639-2 Language Code List, but it’s a good general rule to follow.)
  5. You can actually do a lot with a little. In the past few weeks the metadata I’ve worked with has ranged from really amazing full manuscript descriptions (in TEI P5) from the Walters Art Museum to very simple and basic descriptions in Dublin Core from e-Codices. Although the full descriptions provide more information (helpful for full-text searching), the simple DC records, assuming they include the basic information you’d want for searching (title, date, language, maybe incipits), are just fine. So don’t let a lack of detail put you off from contributing to MESA!

I would also encourage you to release your data under an Open Access license (something from the Creative Commons), and to make your data easy to grab (either by posting everything online, as the Walters Art Museum has done, or by releasing it through an OAI provider, as e-Codices has done).

Manuscript-level records in Omeka

I had some time tonight to figure out how to get the manuscript-level records into Omeka.

I’d already worked out the (very simple) XPath needed to pull out the few bits of information I wanted from the TEI msDesc: siglum, title, URL to the Walters data, BookReader URL, and URL to the first thumbnail image in the ms (which I’m just using as an illustration, a placeholder for the full ms). This evening I did the final checks to make sure the output was right, then just ran the XSLT against all the msDesc files and created .csv files for each one (there’s an Omeka plug-in that will accept CSV input, in order to bulk ingest metadata and files). I used the “insert file contents” function in TextWrangler (bless that program) to pull all of the individual CSV rows into a single document, then ingested that into Omeka. There were a few bugs, of course, but generally it was smooth. I’ve made a few of the records live, just those that already have illumination records tagged in Omeka too. What this means is that you can now go to the Omeka site, go to “Browse Items by Tag” (http://www.dotporterdigital.org/omeka/items/tags), and click on one of the larger tags (each ms and illumination is tagged with the ms siglum; the more illustrations there are, the larger the tag will appear in the browse list). At the moment the first entry in the list will be the record for the manuscript, followed by the record for the illuminations… although I don’t know if that’s just because the manuscript records are newer.
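Back to the CSV step for a moment: each manuscript’s row just carried the five fields pulled by the XPath above. The header names and values below are placeholders of my own, not the actual export:

siglum,title,walters_url,bookreader_url,thumbnail_url
[siglum],[title],[URL to the Walters data],[BookReader URL],[URL to the first thumbnail]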

I would like to include each illumination in the ms records (HasPart) and the ms in the illumination records too (IsPartOf), but I am not certain that’s something I’ll be able to do programmatically. Anyway, I think that is the next thing on my list. That and tagging all of the other illumination records with the siglum (so they will be browseable with the manuscript).

Walters in Omeka

This evening I spent some time thinking about the best way to organize the Digital Walters data into Omeka.

I’ve already experimented with bulk ingesting all of the illuminations (pulling all the decoDesc tags from the TEI manuscript descriptions, and creating a record in Omeka for each one). You can see these in the Omeka instance (although it’s not very pretty). I realized that, as fun as that experiment was, in order for it to be useful I need to take a step back and reevaluate how best to move forward.

I created a record for one of the manuscripts: http://www.dotporterdigital.org/omeka/items/show/2618. It’s basic, including the Title (and the siglum under Alternative Title), links to the manuscript’s home on the Digital Walters site and its Bookreader version on this site (both under Description), and under Has Part, links to an illumination record in Omeka, an illumination that appears in that manuscript.

I want to do a few things to start out:

1) Create one record for each manuscript. I will do this using Omeka’s CSV plug-in… I’ve figured out how to pull all of the information I need from each of the TEI MS Description files; now I need to figure out how to pull all of it into one file and make that file a CSV file. I think I can use XInclude to do that, but I need to experiment more than I had time to tonight. (I sketch a possible shell shortcut below, after item 2.)

2) I would like to have a way to automatically attach the illumination records that are already in Omeka to the new manuscript records. The link that’s in the test record is one I added by hand, but Omeka has a collection hierarchy and I need to play with that to see if there might be something in there that can be used for this purpose. What I fear is that the hierarchy is only at the full level – that is, I can say that all of the illuminations are under all of the manuscripts, but I can’t say that some subset of the illuminations are under some particular manuscript. I will need to find out more!
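About the shell shortcut mentioned in 1): if each manuscript’s export ends up as its own small CSV file with a header row, the combined file for the CSV plug-in could be built with something like this (the folder and file names are made up for the example):

# keep the header row from one file, then append the data rows from every per-manuscript CSV
head -n 1 csv_rows/W4.csv > combined.csv
for f in csv_rows/*.csv; do tail -n +2 "$f" >> combined.csv; done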

It’s good to be back in the site. I still have an article to finish (and another already started) but I would like to make some progress on the Omeka catalog in the next month or so.