Zombie Manuscripts: Digital Facsimiles in the Uncanny Valley

This is a version of a paper presented at the International Congress on Medieval Studies, May 12, 2018, in session 482, Digital Skin II: ‘Franken-Manuscripts’ and ‘Zombie Books’: Digital Manuscript Interfaces and Sensory Engagement, sponsored by Information Studies (HATII), Univ. of Glasgow, and organized by Dr. Johanna Green.

The uncanny valley was described by Masahiro Mori in a 1970 article in the Japanese journal Energy, and it was not fully translated into English until 2012.[1] In this article, Mori discusses how he envisions people responding to robots as they become more like humans. The article is a thought piece – that is, it’s not based on any data or study. In the article, which we’ll walk through closely over the course of this presentation, Mori posits a graph, with human likeness on the x-axis and affinity on the y-axis. Mori’s proposition is that, as robots become more human-like, we have greater affinity for them, until they reach a point at which the likeness becomes creepy, or uncanny, leading to a sudden dip into negative affinity – the uncanny valley.

Now, Mori defined the uncanny valley specifically in relation to robotics, but I think it’s an interesting thought exercise to see how we can plot various presentations of digitized medieval manuscripts along the affinity/likeness axes, and think about where the uncanny valley might fall.

In 2009 I presented a paper, “Reading, Writing, Building: the Old English Illustrated Hexateuch” (unpublished but archived in the Indiana University institutional repository), in which I considered the uncanny valley in relation to digital manuscript editions. This consideration followed a long description of the “Turning the Pages Virtualbook” technology which was then being developed at the British Library, of which I was quite critical. At that time, I said:

In my mind, the models created by Turning the Pages™ fall at the nadir of the “uncanny valley of digital texts” – which has perhaps a plain text transcription at one end and the original manuscript at the other end, with print facsimiles and editions, and the various digital displays and visualizations presented earlier in this paper falling somewhere between the plain text and the lip above the chasm.

Which would plot out something like this on the graph. (Graph was not included in the original 2009 paper)

Dot’s 2009 Conception of the Uncanny Valley of Manuscripts

After nine years of thinking on this and learning more about how digital manuscripts are created and how they function, I’m no longer happy with this arrangement. Additionally, in 2009 I was working with imperfect knowledge of Mori’s proposition – the translation of the article I referred to then was an incomplete translation from 2005, and it included a single, simplified graph in place of the two graphs from the original article, which we will look at later in this talk.

Manuscripts aren’t people, and digitized manuscripts aren’t robots, so before we start I want to be clear about what exactly I’m thinking about here. Out of Mori’s proposition I distill four points relevant to our manuscript discussion:

First, robots are physical objects that resemble humans more or less (that is the x-axis of the graph)

Second, as robots become more human-like, people have greater affinity for them (until they don’t – uncanny valley) – this is the y-axis of the graph

Third, the peak of the graph is a human, not the most human robot

Fourth, the graph refers to robots and to humans generally, not robots compared to a specific human.

Four parallel points can be drawn to manuscripts:

First, digitized manuscripts are data about manuscripts (digital images + structural metadata + additional data) that are presented on computers. Digitized manuscripts are pieces, and in visualizing the manuscript on a computer we are reconstructing them in various ways. (Given the theme of the session I want to point out that this description makes digitized manuscripts sound a lot more like Frankenstein’s creature than like a traditional zombie, and I’m distraught that I don’t have time to investigate this concept further today) These presentations resemble the parent manuscript more or less (this is the x-axis)

Second, as presentations of digitized manuscripts become more manuscript-like, people have greater affinity for them (until they don’t – uncanny valley) – this is the y-axis

Third, the peak of the graph is the parent manuscript, not the most manuscript-like digital presentation

Fourth, the graph refers to a specific manuscript, not to manuscripts generally

I think that this is going to be the major difference in applying the concept of the uncanny valley to manuscripts vs. robots: while robots are general, not specific (i.e., they are designed and built to imitate humans, not specific people), the ideal (i.e., most manuscript-like) digital presentation of a manuscript would need to be specific, not general (i.e., it would need to be designed to look and act like its parent manuscript, not like any old manuscript).

Now let’s move on to Affinity

A Valley in One’s Sense of Affinity

Mori’s article is divided into four sections, the first being “A Valley in One’s Sense of Affinity”. In this section Mori describes what he means by affinity and how affinity is affected by sensory input. Figure one in this section is the graph we saw before, which starts with an Industrial Robot (little likeness, little affinity), then a Toy Robot (more likeness, more affinity), then drops to negative affinity at about 80-85% likeness, with Prosthetic Hand at negative affinity and Bunraku Puppet on the steep rise to positive affinity and up to Healthy Person.
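For readers who prefer a picture, here is a minimal sketch of what a curve of this shape might look like, plotted in Python. Mori’s article presents no data, so the curve and the placement of the labelled examples below are illustrative guesses only, roughly following the description above.

```python
# A notional uncanny valley curve: human likeness on the x-axis, affinity on
# the y-axis, with a sharp dip around 80-85% likeness. The shape and the
# example placements are illustrative only -- Mori gives no numbers.
import numpy as np
import matplotlib.pyplot as plt

likeness = np.linspace(0, 1, 200)
affinity = np.sin(likeness * np.pi / 2) - 1.6 * np.exp(-((likeness - 0.85) ** 2) / 0.003)

fig, ax = plt.subplots()
ax.plot(likeness, affinity)
ax.axhline(0, color="grey", linewidth=0.5)  # the line between affinity and negative affinity
for x, label in [(0.2, "industrial robot"), (0.5, "toy robot"),
                 (0.85, "prosthetic hand"), (0.93, "bunraku puppet"),
                 (1.0, "healthy person")]:
    ax.annotate(label, (x, np.interp(x, likeness, affinity)), fontsize=8)
ax.set_xlabel("human likeness")
ax.set_ylabel("affinity")
ax.set_title("Notional uncanny valley (illustrative only)")
plt.show()
```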

For Mori, sensory input beyond the visual is important for an object’s placement on the x-axis. An object might look very human, but if it feels strange, that doesn’t only send the affinity into the negative, it also lessens the likeness. Mori’s original argument focuses on prosthetic hands, specifically on realistic prosthetic hands, which cannot be distinguished at a glance from real ones. I’m afraid the language in his example is ableist so I don’t want to quote him,

Luke Skywalker’s prosthetic hand in The Empire Strikes Back

but his argument is essentially that when one touches a very realistic prosthetic hand and realizes it is not a real hand (as one had been led to believe), the hand becomes uncanny. Relating this feeling to the graph, Mori says, “In mathematical terms, this can be represented by a negative value. Therefore, in this case, the appearance of the prosthetic hand is quite humanlike, but the level of affinity is negative, thus placing the hand near the bottom of the valley in Figure 1.”

The character Osono, from the play Hade Sugata Onna Maiginu (艶容女舞衣), in a performance by the Tonda Puppet Troupe of Nagahama, Shiga Prefecture. https://en.wikipedia.org/wiki/Bunraku#/media/File:Osonowiki.jpg (CC:BY:SA)

Bunraku puppets, while not actually resembling humans physically as strongly as a very realistic prosthetic hand visually resembles a human hand, fall farther up the graph both in terms of likeness and in affinity. Mori makes it clear that likeness is not only, or even mostly, a visual thing. He says:

I don’t think that, on close inspection, a bunraku puppet appears similar to a human being. Its realism in terms of size, skin texture, and so on, does not even reach that of a realistic prosthetic hand. But when we enjoy a puppet show in the theater, we are seated at a certain distance from the stage. The puppet’s absolute size is ignored, and its total appearance, including hand and eye movements, is close to that of a human being. So, given our tendency as an audience to become absorbed in this form of art, we might feel a high level of affinity for the puppet.

So it’s not that bunraku puppets look like humans in great detail, but that when we experience them within the context of the puppet show they have the effect of being very human-like; thus they are high on the human likeness scale.

For a book-related parallel I want to quote briefly a blog post, brought to my attention earlier this week, by Sean Gilmore. Sean is an undergraduate student at Colby College and this past semester took Dr. Megan Cook’s Book History course, for which he wrote this post, “Zombie Books; Digital Facsimiles for the Dotty Dimple Stories.” There’s nothing in this post to suggest that Sean is familiar with the uncanny valley, but I was tickled with his description of reading a digital facsimile of a printed book. Sean says:

In regards to reading experience, reading a digital facsimile could not be farther from the experience of reading from the Dotty Dimple box set. The digital facsimile does in truth feel like reading a “zombie book”. While every page is exactly the same as the original copy in the libraries of the University of Minnesota, it feels as though the book has lost its character. When I selected my pet book from Special Collection half of the appeal of the Dotty Stories was the small red box they came in, the gold spines beckoning, almost as if they were shouting out to be read. This facsimile, on the other hand, feels like a taxidermy house cat; it used to be a real thing, but now it feels hollow, and honestly a little weird.

Sean has found the uncanny valley without even knowing it exists.

The Effect of Movement

The second section of Mori’s article, and where I think it really gets interesting for thinking about digitized manuscripts, is The Effect of Movement. In the first section we were talking in generalities, but here we see what happens when we consider movement alongside general appearance. Manuscripts, after all, are complex physical objects, much as humans are complex physical objects. Manuscripts have multiple leaves, which are connected to each other across quires, and the quires are then bound together and, often, connected to a binding. So moving a page doesn’t just move a page, much as bending your leg doesn’t just move your leg. Turning the leaf of a manuscript might tug on the conjoined leaf, push against the binding, tug on the leaves preceding and following – a single movement provoking a tiny chain reaction through the object, and one which, with practice, we are conditioned to recognize and expect.

Mori says:

Movement is fundamental to animals— including human beings—and thus to robots as well. Its presence changes the shape of the uncanny valley graph by amplifying the peaks and valleys (Figure 2). For illustration, when an industrial robot is switched off, it is just a greasy machine. But once the robot is programmed to move its gripper like a human hand, we start to feel a certain level of affinity for it.

And here, finally, we find our zombie, at the nadir of the “Moving” line of the uncanny valley. The lowest point of the “Still” line is the Corpse, and you can see the arrow Mori has drawn from “Healthy Person” at the pinnacle of the graph down to “Corpse” at the bottom. As Mori says, “We might be glad that this arrow leads down into the still valley of the corpse and not the valley animated by the living dead.” A zombie is thus, in this proposition, an animated corpse. So what is a “dead” manuscript? What is the corpse? And what is the zombie? (I don’t actually have answers, but I think Johanna might be addressing these or similar questions in her talk)

Reservoir Dogs (not zombies)
The Walking Dead (shuffling zombies)
28 Days Later (manic zombies)

I expect most of us here have seen zombie movies, so, in the same way we’ve been conditioned to recognize how manuscripts move, we’ve been conditioned to understand when we’re looking at “normal” humans and when we’re looking at zombies. They move differently from normal humans. It’s part of the fun of watching a zombie film – when that person comes around the corner, we (along with the human characters in the film) are watching carefully. Are they shuffling or just limping? Are they running towards us or away from something else? It’s the movement that gives away a zombie, and it’s the movement that will give away a zombie manuscript.

 

I want to take a minute to look at a manuscript in action. This is a video of me turning the pages of Ms. Codex 1056, a Book of Hours from the University of Pennsylvania. This will give you an idea of what this manuscript is like (its size, what its pages look like, how it moves, how it sounds), although within Mori’s conception this video is more similar to a bunraku puppet than to the manuscript itself.

It’s a copy of the manuscript, showing just a few pages, and the video was taken in a specific time and space with a specific person. If you came to our reading room and paged through this manuscript, it would not look and act the same for you.

e-codices manuscript viewer
e-codices viewed through Mirador

Now let’s take a look at a few examples of different page-turning interfaces. The first is from e-codices, and is their regular, purpose-built viewer. When you select the next page, the opening is simply replaced with the next opening (after a few seconds for loading). The second is also e-codices, but is from the Mirador viewer, a IIIF viewer that is being adopted by institutions and that can also be used by individuals. Similar to the other viewer, when you select the next page the opening is replaced with the next opening (and you can also track through the pages using the image strip along the bottom of the window). The next example is a Bible from Swarthmore College near Philadelphia, presented in the Internet Archive BookReader. This one is designed to mimic a physical page turning, but it simply tilts and moves the image. This would be fine (maybe a bit weird) if the image were text-only, but as the image includes the edges of the text-block and you can see a bit of the binding, the effect here is very odd. Finally, my old friend Turning the Pages (a newer version than the one I complained about in my 2009 paper), which works very hard to mimic the movement of a page turning, but does so in a way that is unlike any manuscript I’ve ever seen.

Escape by Design

In the third section of his article, Mori proposes that designers focus their work in the area just before the uncanny valley, creating robots that have lower human likeness but maximum affinity (similar to how he discussed bunraku puppets in the section on affinity, although they are on the other side of the valley). He says:

In fact, I predict that it is possible to create a safe level of affinity by deliberately pursuing a nonhuman design. I ask designers to ponder this. To illustrate the principle, consider eyeglasses. Eyeglasses do not resemble real eyeballs, but one could say that their design has created a charming pair of new eyes. So we should follow the same principle in designing prosthetic hands. In doing so, instead of pitiful looking realistic hands, stylish ones would likely become fashionable.

Floral Porcelain Leg from the Alternative Limb Project (http://www.thealternativelimbproject.com/project/floral-porcelain-leg/)

And here’s an example of a very stylish prosthetic leg from the Alternative Limb Project, which specializes in beautiful and decidedly not realistic prosthetic limbs (and realistic ones too). This is definitely a leg, and it’s definitely not her real leg.

 

In the world of manuscripts, there are a few approaches that would, I think, keep digitized manuscript presentations in that nice bump before the valley:

 

“Page turning” interfaces that don’t try too hard to look like they are actually turning pages (see the two e-codices examples above)

Alternative interfaces that are obviously not attempting to show the whole manuscript but still illustrate something important about them (for example, RTI, MSI, or 3D models of single pages). This example is an interactive 3D image of the miniature of St. Luke from Bill Endres’s Manuscripts of Lichfield Cathedral project.

Visualizations that illustrate physical aspects of the manuscript without trying to imitate them (for example, VisColl visualizations with collation diagrams and bifolia)

 

I think these would plot out something like this on the graph.

Dot’s 2018 Conception of the Uncanny Valley of Digitized Manuscripts

This is all I have to say about the uncanny valley and zombie books, but I’m looking forward to Johanna, Bridget, and Angie’s contributions and to our discussion at the end. I also want to give a huge shout-out to Johanna and Bridget: to Johanna for conceiving of this session and inviting me to contribute, and to both of them for being immensely supportive colleagues and friends as I worked through my thoughts about frankenbooks and zombie manuscripts, many of which, sadly, didn’t make it into the presentation, but which I look forward to investigating in future papers.

[1] M. Mori, “The uncanny valley,” Energy, vol. 7, no. 4, pp. 33–35, 1970 (in Japanese); M. Mori, K. F. MacDorman, and N. Kageki, “The Uncanny Valley [From the Field],” IEEE Robotics & Automation Magazine, vol. 19, no. 2, pp. 98–100, June 2012 (English translation). (https://ieeexplore.ieee.org/document/6213238/)

Using VisColl to Visualize Parker on the Web: Reports on an experiment

This is the full text of a talk I presented at the Parker on the Web 2.0 Symposium in Cambridge on March 16, 2018 (Please note addendum at the end which addresses an issue that came up in discussion later in the day.)

I want to begin my presentation by talking about interface.

DATA OVER INTERFACE

A couple of years ago I presented a keynote at a digital humanities conference on digital editing in which I made the argument that the data for a project should take precedence over the interfaces used to present that data. (I stole this idea from my colleague Doug Emery, and I liked it so much I had it put on a T-shirt.) In my talk today I want to investigate how data and interface work together, how existing interfaces can influence both the data we gather and the development of new interfaces, and some ways that we can think around existing interfaces to develop new ones (and what this in turn means for our data).

 

This is MS 433, a Miscellany copied in a number of hands from the 13th into the late 15th century. If you want to see this manuscript, you have a few different options, which you can access through the menu in the top right.

The options are: Image View, Book View, Scroll View, and Gallery View. You probably know exactly what you’ll get when you make a selection here: Image View will present you with a single image, Book View will show the book openings, also known as facing pages (as Dr Anne McLaughlin said in her introduction at the Symposium, Book View presents the images “as a book, so when I turn the pages, it looks like a book”), Scroll View will show all the page images in a continuous row that you can scroll through back and forth, and Gallery View will show all page images as thumbnails in a single page.

Each of these views serves a different purpose: Image View, Book View, and Scroll View present the images in a size large enough to read, with slightly different methods for moving through the book, while Gallery View is more like a finding tool that also gives you the ability to get the “sense” of the aesthetic contents of a book: the relative size of script and written area, distribution of illuminations or miniatures, that kind of thing (as Anne said in her introduction, in this view you can “look at the whole thing – look for initials, for something pretty to look at”). You wouldn’t read a text in the Gallery View; you would select an image from that view and then interact with that larger image (clicking on a thumbnail in the Gallery View on Parker takes you to the Image View).

I want to consider for a moment why we present digital manuscript images in these ways. Let’s start by looking at some examples of non-digital manuscript facsimiles.

The Exeter book of Old English poetry . London, Printed and Pub. for the Dean and Chapter of Exeter Cathedral by P. Lund, Humphries & Co., ltd., 1933. Limited to twelve copies, unnumbered and not for sale and two hundred and fifty copies numbered and for sale of which this is no. 182 PR 1490 .A1 1933 Special Coll Oversize (University of Arizona)

For example, here’s an opening from the 1933 printed facsimile of the Exeter Book. The pages face each other in the manuscript (this is 65b and 66a), but they’ve been decontextualized, presented in frames and with labels underneath.

Bestiario di Peterborough. Rome: Salerno Editrice, 2004

Compare this with the Salerno Editrice edition of the Peterborough Bestiary, published in 2004, which looks very much like what I imagine the manuscript looks like (I haven’t seen it so I can’t say for sure, but it definitely looks like a manuscript, unlike the Exeter Book facsimile, which looks like pictures of pages reproduced in a modern book).

Microfilm reader and microfilm,
https://blogs.acu.edu/csart/2017/02/08/from-microfilm-to-mass-media-biblical-manuscripts-in-the-digital-age/

And here’s something that is probably familiar to many of us: microfilm, which presents images on a long ribbon of film that you scroll through a special machine to find whichever page you want.

 

Microfiche Reader, linked example from https://www.abc-clio.com/ODLIS/odlis_m.aspx#microfiche

Finally, there’s microfiche, which consists of rectangles of film onto which small images of pages are reproduced in a grid.

I expect you can see where I’m going with this, because I’m not exactly being subtle. The options for viewing manuscripts in Parker on the Web are basically the same as they have always been. The difference is that instead of having to go to a library to check out a book or access a reader (or order a book through interlibrary loan, if your library doesn’t own it), you can access them in your office, or at your house, at all times of day (as long as your Internet is working and the system isn’t down).

It’s not just the Mirador Viewer (the interface that provides image access in Parker on the Web) that has these options; every online environment for viewing medieval manuscripts will have some similar setup, with at least a page-turning interface and frequently a selection of the other three. E-codices is the only interface I know of that has another option: to view the front and back of a leaf at the same time, which is pretty cool. (The Scroll View also shows the front and back of leaves side by side, but in e-codices you can purposefully select this view. If you know of any other interface with unique views I would be very happy to know about them.)

A System: Data + Processes

Why is it the case that all manuscript libraries have basically the same interfaces? One reason is probably because, as we can see from the non-digital examples above, that’s the way we’ve always done it. We are used to seeing manuscripts as single pages, and facing pages, and scrolling pages, and galleries of pages, so that’s how we present them digitally. But it becomes a self-fulfilling prophecy. This is how we view manuscripts, so we create systems that allow us to look at manuscripts in this way. If we want to look at manuscripts in a different way, we need to build new systems. Keep in mind that in a computer system you have two things that need to work together: You need data (information presented in a format that the computer can work with), and you need processes (software or scripts that take that data and do something with it). (This is a really simplified view, of course, but I think it works pretty well)

Parker on the Web: A IIIF System

Parker on the Web, for example, is a IIIF system, so in order to function it needs IIIF Manifests, which provide metadata in a specific format in addition to links to images served in a specific way, and it needs the IIIF server to serve the images, and the IIIF APIs (or more properly, software built to work with the APIs). If any piece of this system doesn’t meet specification – if the manifest is formatted incorrectly, or the image links don’t point to a IIIF image server, or the software doesn’t reference the APIs correctly – the system won’t work. Without both data and processes – data and processes designed to work together – you won’t have a working system.
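As a very reduced illustration of data plus process working together, here is a sketch of a small script that treats a IIIF manifest as the data and acts as a minimal process on it, listing each canvas (page image) with its image URL. The manifest URL is a placeholder, and the structure assumed follows the IIIF Presentation API 2.x (a top-level “sequences” list containing “canvases”).

```python
# A minimal sketch of "data + process" in a IIIF system: the manifest is the
# data; this script is a (very small) process acting on it.
import json
from urllib.request import urlopen

MANIFEST_URL = "https://example.org/iiif/ms-433/manifest"  # hypothetical URL

with urlopen(MANIFEST_URL) as response:
    manifest = json.load(response)

# Walk the default sequence and print one line per canvas (page image),
# assuming IIIF Presentation API 2.x structure.
for canvas in manifest["sequences"][0]["canvases"]:
    label = canvas.get("label", "[no label]")
    image_id = canvas["images"][0]["resource"]["@id"]
    print(f"{label}\t{image_id}")
```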

I’m interested in creating new ways to present digitized manuscripts, and the frame I’m using is that of the manuscript’s collation. Rather than displaying a digitized manuscript only as a series of images of pages arranged from beginning to end, I want to create displays that take into account pages as leaves connected to each other through the pattern of the quiring: the collation.

M. R. James, A Descriptive Catalogue of the Manuscripts in the Fitzwilliam Museum (Cambridge University Press, 1895)

Collation isn’t a new way to think about manuscripts. In his 1895 Catalog of the Fitzwilliam Museum, M. R. James wrote a description of how to collate a manuscript, and also included collation formulas in most of the manuscript descriptions (a few random examples are in the figure below).

Examples of collation formulas from M. R. James’ Fitzwilliam catalogue

The Parker on the Web also includes collation formulas.

Building a system that considers a manuscript’s quiring in the display should be possible. We have the information (in the form of collation formulas), so we should be able to build processes to act on it. But of course it’s not that simple, because although a collation formula contains the information a person might need to construct a diagram of the codex, it isn’t formatted in a way that a computer can process. It’s not an effective piece of data for a system of the type I describe above. The collation formula isn’t data; it’s a visualization of data, just one way among many to express the physical collation of a manuscript, and which visualization you choose will depend on what you want to do with it. For example, formulas work well in library manuscript descriptions or catalog records because they are compact and textual, while diagrams might be better suited for a scholarly essay or book because they can be annotated. There are other views one could take of the same information; I’m quite fond of this synoptic chart that shows how different texts and image cycles are dispersed through this miscellany.

However, it takes work (both time and effort) to write formulas and draw diagrams and build charts. This is labor that doesn’t have to be repeated! What we need isn’t a formula, but a specially formatted, data-oriented description that can be turned into many different versions for different purposes.

VisColl as a system

This is where VisColl comes in. Briefly, VisColl is a system consisting of a data model – basically a set of rules you can use as a guide to build collation models of manuscripts – and scripts that process a collation model to generate different views of it. We currently have three working scripts: one that generates diagrams, one that generates a presentation of leaves as conjoins (which we call the bifolia view; this view requires digitized page images), and one that generates collation formulas.
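To make the “one model, many views” idea concrete, here is a much-simplified sketch. The list of quires below stands in for VisColl’s actual XML collation model (it is not the real schema), and the two small functions generate two different views from the same data: a bare-bones formula and a pairing of conjoint leaves of the kind the bifolia view is built on.

```python
# A minimal sketch of the VisColl idea: one machine-readable collation model,
# several generated views. The structure below is a simplified stand-in for
# VisColl's XML model, not its actual schema.
quires = [
    {"n": 1, "leaves": 8},   # a regular quire of eight leaves
    {"n": 2, "leaves": 8},
    {"n": 3, "leaves": 10},
]

def formula(quires):
    """One possible textual view: a very plain collation formula."""
    return ", ".join(f"{q['n']}^{q['leaves']}" for q in quires)

def conjoins(quire):
    """Another view: pair the leaves of a quire as conjoint bifolia,
    outermost sheet first (leaf 1 with leaf n, leaf 2 with leaf n-1, ...)."""
    n = quire["leaves"]
    return [(i + 1, n - i) for i in range(n // 2)]

print(formula(quires))      # 1^8, 2^8, 3^10
print(conjoins(quires[2]))  # [(1, 10), (2, 9), (3, 8), (4, 7), (5, 6)]
```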

We are currently at an in-between stage with VisColl. We have a first version of our data model and have developed a second version, but the second doesn’t have good visualizations yet, so today I’m going to talk about the first model; I’m happy to answer questions about the second data model later.

The prototype of VisColl took collation formulas from The Walters Art Museum in Baltimore’s Digital Walters collection and generated diagrams directly from them. It was this prototype work that convinced me that generating data out of existing formulas was a terrible idea and would never work at scale. Nevertheless, I decided that the first step in my experiment would be to attempt to parse the collation formulas in Parker on the Web and convert them into XML files following the rules for our collation models. I wrote scripts to pull the formulas out of the Parker records and got to work figuring out rules that would describe the conversion from formula to XML. As I spent a few hours on this, I was reminded of why we decided to move from processing formulas to creating new models in the first place.

The collation formulas in Parker on the Web are inconsistent (this is not a criticism of Parker on the Web – the formulas come from different catalogues created over time by many different people, with no shared guidelines. The same thing would happen in any project that combines existing catalogs). Unlike with printed books, there is no standard for manuscript collation formulas, and the formulas in Parker on the Web have a lot of variance among them, notably that some use Arabic numerals, some Roman numerals, and some letters, while some describe flyleaves as quires and some do not. Because of the inconsistency, it was very difficult to get a handle on every single thing that would need to be caught by a process in order to convert every detail of a formula into an XML model. The use of letters, Arabic numerals, and Roman numerals is one example. In order to identify quires I would need a script that would be able to interpret each of these, to recognize when a quire identified by a letter was a set of flyleaves and when not, and to be able to generate multiple quires when presented with a span of numbers or letters.
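To give a flavour of the problem, here is a sketch of just one small piece of such a converter: interpreting quire identifiers that might be Arabic numerals, Roman numerals, or letters. This is illustrative only, not the script I wrote, and it immediately runs into the ambiguity described above (is “c” quire 100, quire 3, or a lettered flyleaf quire?).

```python
# A sketch of interpreting mixed quire identifiers from collation formulas.
import re

# Roman numeral values plausibly needed for quire signatures.
ROMAN = {"i": 1, "v": 5, "x": 10, "l": 50, "c": 100}

def roman_to_int(s):
    """Convert a simple Roman numeral (e.g. 'xii', 'xiv') to an integer."""
    total, prev = 0, 0
    for ch in reversed(s.lower()):
        value = ROMAN[ch]
        total = total - value if value < prev else total + value
        prev = max(prev, value)
    return total

def quire_number(token):
    """Interpret a quire identifier written as an Arabic numeral, a Roman
    numeral, or a letter (a=1, b=2, ...). Note the built-in ambiguity: 'c' is
    matched here as Roman 100, but in some formulas it would mean quire 3 or
    a lettered flyleaf quire -- one reason parsing formulas is fragile."""
    if token.isdigit():
        return int(token)
    if re.fullmatch(r"[ivxlc]+", token, flags=re.IGNORECASE):
        return roman_to_int(token)
    if re.fullmatch(r"[a-z]", token, flags=re.IGNORECASE):
        return ord(token.lower()) - ord("a") + 1
    raise ValueError(f"unrecognised quire identifier: {token!r}")

print(quire_number("12"), quire_number("xii"), quire_number("b"))  # 12 12 2
```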

Penn Collation Modeler

It is possible that this is something that could be done, given enough time and expertise, but given my constraints I was clearly not going to be able to do it myself for this talk. So instead, I turned to the Collation Modeler, which is the tool that we use at Penn to build models from scratch. (Another implementation of VisColl is being developed as part of the Digital Tools for Manuscript Study project by the Old Books New Science Lab at the University of Toronto)

If you have a collation formula to work from, it’s actually pretty easy to build a model for it in the Collation Modeler.

MS 433 in the Penn Collation Modeler

Here is MS 433 again, in the context of the Collation Modeler. On the main manuscript page, I’ve listed out all the quires in the manuscript and you can see the number of leaves in each. Using the Collation Modeler I can generate multiple regular quires all at once and then modify them, or create quires one at a time. Folio numbers are generated automatically, but if the manuscript is paginated I need to change the folio numbers to page numbers (formatted as two numbers separated by a dash); to make this easier I wrote a script to fix the numbering in the finished collation model rather than doing it in the modeler (pagination will be built into the system for the new data model).
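For illustration, here is a sketch of the kind of folio-to-page renumbering that script performs, reduced to its arithmetic: leaf n of a straightforwardly paginated manuscript carries pages 2n-1 and 2n, written as a dash-separated pair. The real script operates on the finished XML collation model rather than a bare loop like this.

```python
# A minimal sketch of converting folio numbers to page-number pairs.

def folio_to_pages(folio_number):
    """Leaf (folio) n carries pages 2n-1 (recto) and 2n (verso)."""
    recto = 2 * folio_number - 1
    verso = 2 * folio_number
    return f"{recto}-{verso}"

# Example: the first five leaves of a paginated manuscript.
for folio in range(1, 6):
    print(f"fol. {folio} -> pp. {folio_to_pages(folio)}")
# fol. 1 -> pp. 1-2, fol. 2 -> pp. 3-4, and so on.
```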

MS 433 Quire 3 in the Penn Collation Modeler

Taking a look at Quire 3, we can see the list of leaves and the folio numbering (which again I can change – we can also renumber completely from any point in the manuscript, if for example the numbering skips a leaf or numbers repeat). We can also note the “mode” of a leaf – is it original to the manuscript, added, a replacement, or missing? Once the model is built in the Collation Modeler, I output the collation model, which is an XML file.

MS 433 output: diagrams, bifolia view, and collation formula

And then I processed this model using the existing scripts and got diagrams, bifolia view, and a collation formula (the script actually generates a set of formulas – we can generate as many different flavors of formula as we need). You will note that the formula isn’t exactly like the formula from the record.

MS 433 collation formula: comparing the James formula and the VisColl-generated formula

That’s because the first data model is very simple and doesn’t indicate advanced things like gaps or quire groupings (indicated by “gap” and || dividers in this formula). This is actually something that can be done in the second data model, so hopefully by the end of the summer we’ll be able to output something that looks more like the record formula, and also include that information with the diagrams and bifolia view.

Once I decided to forgo processing the formulas, work progressed more quickly. I was able to get 21 formulas into the Collation Modeler within a few hours spread out over two days. In addition to referencing the formula I would also reference the folio or page numbering in the manuscript description (which describes when page numbers are missing, repeated, or otherwise inconsistent), and at times I would reference the image files too (although I did that to double-check numbering, not to seek out physical clues to collation).

Work slowed down again when I discovered while entering the data that in several cases the foliation or pagination given in the record didn’t agree with the numbering required by the given collation formula. In the Collation Modeler you specify which folio or page aligns with which leaf in a quire – there should be a 1:1 correspondence between foliated leaves or pairs of paginated pages and leaves listed in the collation model. The first time I noticed this, a manuscript with several regular quires of 8 ended up with 8 more leaves required by the formula than were accounted for in the manuscript description. I figure that the person who made the formula got caught up in the regular quires and just added an extra one to the count, so I was comfortable removing one quire of 8 from the model. It’s not always this clear, however.

A few examples of instances where foliation and collation don’t add up

I have a list of manuscripts that I was unable to make models for because of slight variations between the number of leaves needed by the model and the number of pages or folios listed in the description. Somebody would need to sit down with the manuscripts to see if the problem is with the formula or with the foliation or pagination. (This shows one of the positive side effects of the collation modeling approach that I didn’t consider when we started: it can be used as a tool in the cataloger’s toolkit to double-check both the collation and the numbering to ensure they align.)
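The check itself is trivial once the information is treated as data rather than prose. A sketch along these lines (with made-up numbers) is enough to flag a mismatch between the number of leaves a collation model requires and the number of leaves the foliation in a description accounts for.

```python
# A minimal sketch of the collation/foliation double-check, with invented
# numbers standing in for a real model and a real description.
quires = [8, 8, 8, 8, 6]          # leaves per quire, from the collation model
first_folio, last_folio = 1, 38   # foliation range from the description

leaves_from_model = sum(quires)
leaves_from_foliation = last_folio - first_folio + 1

if leaves_from_model == leaves_from_foliation:
    print(f"OK: {leaves_from_model} leaves")
else:
    print(f"Mismatch: model needs {leaves_from_model} leaves, "
          f"foliation accounts for {leaves_from_foliation}")
```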

I’ve created a website that links brief records to the Parker on the Web and to the diagram/bifolia collation views, just for fun (and I’m afraid it’s not very pretty). If you’d like to see it, you can visit parkercollations.omeka.net.

Although the combined diagram/bifolia view is interesting on its own, I’m most interested in how it might be combined with the more traditional facing-page view to provide an alternative means of access and navigation for digitized manuscripts. I’m currently co-PI on Bibliotheca Philadelphiensis (BiblioPhilly), a collaboration between the University of Pennsylvania, the Free Library of Philadelphia, Lehigh University, and the Philadelphia Area Consortium of Special Collections Libraries (PACSCL), funded by the Council on Library and Information Resources. BiblioPhilly is digitizing all the medieval manuscripts in Philadelphia written in Europe before 1600 (476 of them, not including several hundred already digitized at Penn). We have incorporated collation modeling into our cataloging workflow, and we are working with a software developer to build an interface that will make the collation information an integral part of the experience.

Here are some mock-ups that I made to pass along to the software developers so they can see what I’m thinking about. There will be a certain amount of back and forth with them, and others in the project are involved in this, so I’m not really sure what we’ll come up with, but I’m excited to see it. And the reason we can do this is because we have the data. Now we can build the processes.

There’s no reason not to incorporate collation views of some kind into the navigation options of the Parker on the Web and other IIIF collections. There would need to be a standard way to model the collation within IIIF manifests, and then a plug-in for the IIIF image viewers that takes advantage of that new data in new and interesting ways.
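One hedged sketch of what that might look like: IIIF Presentation 2.x already has Ranges (the “structures” list in a manifest) for grouping canvases, and a quire could be expressed as a Range. To be clear, this is not an existing standard for collation, just an illustration of where such data could live; the identifiers below are placeholders.

```python
# A sketch of quire structure riding inside a IIIF Presentation 2.x manifest
# as Ranges in the "structures" list. Not an existing collation standard;
# all IDs are hypothetical.
quire_ranges = [
    {
        "@id": "https://example.org/iiif/ms-433/range/quire-1",  # placeholder
        "@type": "sc:Range",
        "label": "Quire 1 (8 leaves)",
        # one canvas per page: 8 leaves = 16 pages
        "canvases": [
            f"https://example.org/iiif/ms-433/canvas/p{n}" for n in range(1, 17)
        ],
    },
]

manifest_fragment = {"structures": quire_ranges}
print(manifest_fragment["structures"][0]["label"])  # Quire 1 (8 leaves)
```

A collation-aware viewer plug-in could then read such Ranges to offer quire-by-quire navigation alongside the usual page turning.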

I hope that our experience with integrating VisColl into BiblioPhilly from the beginning, and my experiments building models from the Parker formulas for this talk, will encourage Parker on the Web and other libraries to develop more experimental interfaces for their digitized manuscripts.

Addendum

During his presentation “The Durham Library Recreated project,” Dr. Richard Higgins from Durham University Library suggested using the Bodleian Library’s Manifest Editor, one tool in their Digital Manuscripts Toolkit, to rearrange images so they present as bifolia in the facing-page view. Here is a screenshot of the manifest for MS 433 with the first quire rearranged as bifolia:

This works on one level: if you paged through this in an interface using a Book View, you would be presented with the conjoined leaves as sheets. But it’s really just another flat list of images, presented one after the other, just in a different order than they are in the book. (Edit on 3/20/2018: It’s come to my attention that this is the general approach used by the Electronic Beowulf 4.0 in that edition’s collation navigation, so if you want to try paging through manuscript images organized by bifolia you can do it there. Instructions are here; be sure to select manuscript for both sides or else it’s not possible to click the collation option.) This approach doesn’t really express the structural, three-dimensional aspect of the manuscript’s collation, so it can’t be used to generate alternative views (like diagrams or formulas). A manifest like this could, however, be another kind of output from a collation model, though for IIIF I think it would make more sense to make the model part of the manifest, or something standard that IIIF APIs combine with manifests, to create any number of collation-aware views.
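For anyone curious what that rearrangement amounts to in practice, here is a sketch of the reordering for a single quire: conjoint leaves are simply placed next to each other in a flat list, outermost sheet first. The labels are placeholders standing in for canvases; a real script would reorder the canvas list of an actual IIIF manifest.

```python
# A minimal sketch of reordering one quire's images into bifolia order.

def bifolia_order(canvases):
    """Given the canvases of one quire in page order, return them re-ordered
    so each sheet's two leaves sit next to each other, outermost sheet first:
    leaf 1, leaf n, leaf 2, leaf n-1, ..."""
    ordered = []
    left, right = 0, len(canvases) - 1
    while left < right:
        ordered.append(canvases[left])
        ordered.append(canvases[right])
        left += 1
        right -= 1
    if left == right:              # odd number of leaves (e.g. a singleton)
        ordered.append(canvases[left])
    return ordered

quire_1 = [f"fol. {n}" for n in range(1, 9)]   # a quire of eight leaves
print(bifolia_order(quire_1))
# ['fol. 1', 'fol. 8', 'fol. 2', 'fol. 7', 'fol. 3', 'fol. 6', 'fol. 4', 'fol. 5']
```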