OpenURL – an example of a publisher / vendor innovation gap?

Following on from a previous post on innovation in libraries, I’ll focus on a single technology as a first example. It gets quite specific, but should illustrate how the market does not always manage to push things forward in the best interests of the customer.

OpenURL is a standard/technology/product that has been in use for over ten years in libraries. If you are not familiar with the term, it’s a way of getting a library reader to the online full text that their institution subscribes to, going one step beyond a DOI. When launched, it was seen as an innovative solution to a real problem, and like many good ideas was the brainchild of one individual.

[Image: How OpenURL works – University of Queensland]

OpenURL links contain citation metadata and are passed to an institution’s ‘Link Resolver’, which queries its ‘knowledgebase’ of library subscriptions and directs the reader to the appropriate full text for them, usually via a pop-up menu.
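
To make this concrete, here is a minimal sketch of how a database or catalogue might build one of these links. It is purely illustrative: the resolver address is invented, and I am assuming the common OpenURL 1.0 ‘key/encoded-value’ (KEV) journal format, using (from memory) Van de Sompel and Beit-Arie’s D-Lib article on OpenURL as the example citation.

```python
from urllib.parse import urlencode

# Hypothetical base URL of the institution's link resolver
RESOLVER = "http://resolver.example.ac.uk/sfx"

# Citation metadata encoded as OpenURL 1.0 key/encoded-value (KEV) pairs
citation = {
    "ctx_ver": "Z39.88-2004",                       # OpenURL 1.0 context version
    "rft_val_fmt": "info:ofi/fmt:kev:mtx:journal",  # referent is a journal article
    "rft.genre": "article",
    "rft.atitle": "Open Linking in the Scholarly Information Environment",
    "rft.jtitle": "D-Lib Magazine",
    "rft.volume": "7",
    "rft.issue": "3",
    "rft.date": "2001",
    "rft.issn": "1082-9873",
}

# The finished OpenURL; clicking it lands the reader on the resolver menu
openurl = RESOLVER + "?" + urlencode(citation)
print(openurl)
```

The resolver unpacks those rft.* fields, checks them against its knowledgebase, and decides which copy (if any) this particular reader is entitled to.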

Here is a much better, fuller explanation. SFX and Serials Solutions 360 Link are popular Link Resolver products, and both provide central knowledgebases on which libraries can activate their holdings.

OpenURL has never been massively sexy, but it continues to play a large role in online library services.

I would suggest that it was a great idea in its day, but it has since been hampered by a lack of innovation in its presentation and approach. Some parts of the concept probably need a real rethink.

This post looks at the current state of OpenURL from the point of view of major stakeholders.

1) Readers

Reader interaction with a link resolver is typically through a pop-up menu (example). Readers tend to view this as part of the ‘paywall’, as it is often accompanied by a proxy server or federated access login. They may see it as an additional, unnecessary step (‘why can’t I just click on the link and get to what I need?’).

Link resolver menus suffer from confusing terminology and have been hampered by feature creep. They were originally intended to offer print alternatives to full text, something that is less necessary these days. Librarians have also tried to add other ‘helpful’ links, which tend not to be so. Cambridge is as guilty of this as anyone; our supplier, Ex Libris, has offered a better, lighter menu that we really should adopt.

To get to the menu, readers have to click on an OpenURL button next to a citation (the Cambridge one is branded ‘ejournals@cambridge’). I would suggest that readers find the presence of a button even when there is no full text confusing (see point 4 below). And why do we need to offer pop-ups in 2011?

If we could find a way to maintain the functionality of the SFX menu in a more discreet fashion, it would be better for readers. It’s an additional frustration step that, in my opinion, needs to go.

2) Library System vendors
OpenURL seems to remain on vendors’ radar as both a useful service and a profitable product. Ex Libris seems to be folding the functionality into its new generation of resource management products. All seem to be investing in knowledgebases as management of subscription resources becomes more important.

Some are de-emphasising its importance to the end user: Serials Solutions’ Summon is proposing index-enriched URLs that know a customer’s supplier for a text and so avoid bothering the reader with a pop-up window. This seems sensible, assuming it works.

3) Librarians
Following on from the previous point, librarians are now heavily reliant upon OpenURL knowledgebases for a variety of back office functions. Because of this, currency of information in a knowledgebase is mission critical to libraries and should be pursued as a matter of urgency. We’ve seen some improvements in the way publishers and vendors share holdings updates (KBART). This is welcome, but I’m sure every vendor could do more. I speak to librarians who would want a 48-hour maximum turnaround on getting this data updated.

Why? Because it costs us money (lots). For every week or month a knowledgebase cannot reflect our ejournal holdings accurately, some of our expensive subscriptions cannot be accessed via menus and A-Zs, so we are effectively wasting our subs. When your ejournals budget runs to six or seven figures, this waste adds up quickly.

If vendors or publishers were directly feeling this pinch, rather than readers and librarians, I imagine things might be different.

OpenURL linking itself is probably a necessary evil given the multitude of vendors and interfaces we have for A&I and full text. I doubt we will ever truly be able to offer ‘one search box to rule them all’, and will thus have to resort to ‘glue technologies’ like OpenURL. What we really need is a better way to facilitate the linking, which brings me onto …

4) Database and ejournal publishers

OpenURL support has always been patchy among publishers. There are, to my mind, two major problems with the way OpenURL has been implemented by publishers.

1) Granularity of linking. Some publishers will resolve incoming URLs fine; others not so. Some will always drop the reader at the title page for the journal, rather than at the actual article required. I assume they see OpenURL as an unwelcome form of deep linking that bypasses their site’s navigation. This impairs the experience for end users, who assume they are getting a ‘straight to PDF’ button on the link resolver’s menu.

2) Knowledge of customer holdings. One of my major problems with OpenURL has been the way it’s been supported in our subscription abstract/citation databases, e.g. Web of Knowledge, Scopus etc. They tend to ‘spam’ a button by every citation result, forcing the reader to click on each one to see if the library really does have a full text subscription.

[Image: OpenURL links in a citation database – present regardless of actual full text availability]

If your workflow involves checking every week for new research papers on a subject, this can get tedious really quickly.

Only Google Scholar has bothered to improve on the button, by harvesting our holdings from our knowledgebase directly and only showing a link when we have the full text. No other abstract/citation service has bothered to replicate this. It’s not even innovation, just bothering to stay competitive.

A better solution

What bothers me is that it is technically trivial to achieve this even without harvesting.

How? Using APIs and a bit of Ajax, or even server-side code, we can easily step beyond the ‘OpenURL button’ and show holdings information from the link resolver directly in the citation results, along with some branding for the library that picks up the expensive tab for the full text.

All major link resolvers have APIs, and the OpenURL spec itself supports XML for requests and responses.
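
As a sketch of what I mean (assumption-laden, not vendor documentation: the base URL is invented, ‘sfx.response_type’ follows the SFX convention, and the exact XML element names will vary by product), a citation database could ask the resolver for XML instead of a menu and only render a branded link when full text actually comes back:

```python
import xml.etree.ElementTree as ET
from urllib.parse import urlencode
from urllib.request import urlopen

# Hypothetical SFX-style link resolver base URL for the institution
RESOLVER = "http://resolver.example.ac.uk/sfx"

def full_text_available(citation: dict) -> bool:
    """Ask the link resolver whether the library holds full text for a
    citation, without ever showing the reader a pop-up menu."""
    params = dict(citation)
    # Request a machine-readable response rather than the HTML menu
    # (SFX-style parameter; other resolvers have their own equivalents).
    params["sfx.response_type"] = "simplexml"
    with urlopen(RESOLVER + "?" + urlencode(params), timeout=5) as resp:
        root = ET.fromstring(resp.read())
    # Treat any full-text target in the response as a hit. The element
    # names here are assumptions; check your resolver's schema.
    return any(target.findtext("service_type") == "getFullTxt"
               for target in root.iter("target"))
```

A database could run something like this as each result renders and show a ‘Full Text @ Cambridge’ link only where it returns True, which is exactly the behaviour Google Scholar already achieves by harvesting.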

Why don’t publishers do this? My deeply cynical thought is that, as most sell full text as well as citation search services, they would prefer to sell you their copy of the full text alongside the citation.

They might even see OpenURL links as a means for competitors to push their full text into their interfaces. Maybe by keeping things confusing with a generic OpenURL button, they hope that readers will get frustrated and start paying $30 for an article (24 hours only!) instead …

To make matters worse, Scopus and ScienceDirect can now pull in article recommendations from the excellent Ex Libris bX service directly and display them ‘in-interface’. Great stuff, so why can’t they do the same with our full text holdings from the Ex Libris SFX link resolver API?

This would be best for readers, and thus for librarians. I’m not sure how it would affect publisher profit margins, though.

So here we have an example of a market-driven innovation falling behind, a ‘gap’ between reader expectations/current web technology and publisher business models. OpenURL sits uneasily between library, system vendor, publisher and reader. In its current form, it will never truly satisfy anyone.


9 thoughts on “OpenURL – an example of a publisher / vendor innovation gap?”

  1. Hi Ed,
    Good points, thanks.
    On a related matter, I’d like to see more vendors harvesting libraries’ holdings so that they can allow users to limit their search results to just what’s available from their library. This is one of the benefits of the “discovery services” like Summon, but they won’t replace all other vendors’ interfaces for a long time, if ever. Ovid offers this, except that it’s clunky: it doesn’t harvest the holdings data like Google Scholar does.
    http://www.ovid.com/site/products/tools/local_holdings.jsp
    Cheers,
    Laurence
    University of Bath

  2. Some great points here. As someone who is a fan of OpenURL I haven’t really considered much of this.

    Put another way (and simply re-hashing what you have said, no original thoughts here), the industry needs to:
    – Publishers need to have decent URL structures. Nothing fancy, just following good practice (and all following a convention such as publisher.com/doi/[doi] would be good).

    – Publishers and vendors need to continue to work together to make those knowledgebases as damn near perfect as possible. It shouldn’t be so hard for a publisher to share information about what a package contains, in an automated format.

    – A&I databases need to look at showing full text availability on their websites. The industry needs to work together to make this trivial.

    – Link resolvers and universities need to improve the UI: a huge ‘get full text here’ button, and then other relevant services (recommendations etc.) clearly separate. It’s the difference between the Amazon ‘buy this’ button and the ‘this item appeared on these lists’ section. The first needs to be clear and the first call to action.

    Finally, you mention granularity. One aspect of this is how to show that the full text of an article is available where the university does not subscribe to the journal but the author has paid for it to be (Gold) Open Access.

    Chris

  3. We have a halfway house where we’re pulling the SFX results through with AJAX, so no additional popup (e.g. https://librarysearch.rhul.ac.uk/Record/proquest_dll_5438882111, under the ‘holdings’ tab at the bottom of the page).

    I’m not instinctively fond of the Summon solution where all links are encrypted and routed through their resolver – all the knowledge, control, and logging of connections now held by an external company seems like a hostage to fortune, and moving against the current tide towards openness, not with it. If it turns out that it does actually work better, that will be a hard line to defend, though.

    Graham (Royal Holloway)

  4. Graham, thanks, that’s awesome. It’s exactly what I want to see in Web of Knowledge, not the button!

    Chris, good ideas there, distilling it down to the Amazon ‘click here to buy’ button is not a bad argument.

    And Laurence, yes, good idea. I saw that feature and walked away from it. Manually reloading holdings? Why?

  5. Interesting stuff.

    When I first worked with SFX I was guilty of feeling the ‘pop-up menu’ was an opportunity to push a load of ‘useful’ links at the user – but it turns out that people really, really want full text, with other stuff just being an unwanted distraction.

    Having seen the light, I’d now recommend suppressing the menu where possible. I’d argue that even where the library supports multiple routes to the full text, the majority of users aren’t interested in knowing there are two or more routes – just give them the full text.

    I did some work with Oxford on the Sir Louie project http://blogs.oucs.ox.ac.uk/sirlouie/ where they implemented a DAIA compliant interface http://www.gbv.de/wikis/cls/DAIA_-_Document_Availability_Information_API on top of SFX, and I wrote a parser to display the results on demand using the Juice framework http://juice-project.org/ (basically jQuery) – a way of avoiding the dreaded button+menu stuff.

    In the end it feels like we are struggling with a broken approach – it focuses on the institution and not the individual user. For power users there would be real additional value in a browser plugin (like LibX) to offer user-configurable options – e.g. drawing on multiple resolvers, assigning preferred platforms. Taking it even further, why not some dedicated s/w (a ‘workbench’?) which integrates various configurable functionality and other tools (e.g. Zotero) … but of course while this might serve a small number of power users, most will stick with the plain old browser – which causes us problems all over the place (e.g. authN/Z).

    • Owen, the DAIA looks like a great solution and suitably generic for wider use.

      One of the criticisms I’ve seen elsewhere of OpenURL is that it has no real application outside of “library world”, which is one of the reasons it’s never been taken up more widely. Making it work with image-based content in a repository might be a challenge.

      This library-specific problem fits in with the institution-centric approach which you describe. The whole process needs to be user-focused.

  6. I’m very much in favour of avoiding doing things in a library-specific way and embracing the wider world – we are certainly guilty of devising sector-specific solutions to problems.

    However, there is a real challenge in recognising where it is legitimate to take a niche approach, and where it is not. Perhaps even more of a challenge is moving away from a niche approach when the world catches up with problems we have previously solved – e.g. the problem with z39.50 is not that it was developed, but that we are still using it.

    I suspect OpenURL may fall into the ‘legitimately niche’ category – it solves the problem of identical content being published in multiple places with varying pre-arranged conditions of access. While I’m sure this isn’t entirely unique to the library world I can’t think of obvious equivalents in the world at large.

    However, this is starting to change. Music subscription services mean that each individual may have a preference for where they go to listen to music that others mention. The same is starting to become true for films, and I can’t see that books will escape the ‘subscription for flexible access’ model. This will bring the problem OpenURL was designed to solve (the so-called ‘appropriate copy problem’) to the masses, so it will be very interesting to see what happens.

    • Except that OpenURL was only designed to handle scholarly publishing (books, articles etc.). It has no provision for multimedia right now. Its spinoff citation microformat COinS (which embeds Z39.88 ContextObjects) is a case in point: it breaks conceptually whenever you try to force DVD content into it.

      I like this: ‘the problem with z39.50 is not that it was developed, but that we are still using it’. It could apply to a lot of things. We are rubbish at dropping standards in favour of better universal ones.

  7. I suspect librarians are using OpenURL to address a problem that for many users doesn’t exist, namely ‘local copy’. At my institution access to journals is governed by IP address: if I’m on campus I see the subscribed journals (off campus I need to authenticate, then I get access). Local copy is also less relevant in an age of open access and authors putting PDFs online. Ultimately I don’t care where the copy comes from, I just want it (Google Scholar is your friend).

    That said, I use OpenURL extensively, but as an API to discover identifiers for articles, or locate articles in digital archives, using services such as CrossRef, or ones I’ve developed locally. OpenURL has some disadvantages in this context — despite a lengthy and hideous spec document, nobody thought to specify what OpenURL should return!

    But the use I find most compelling is “just in time” linking (see “When Shall We Link?” http://go-to-hellman.blogspot.com/2010/04/when-shall-we-link.html). I’m building databases of 100,000s of articles, many of which currently lack a widely used identifier such as a DOI, but which may one day acquire one. I can use OpenURL to encode enough information to find an article until the time it gets a stable identifier.

    Oh, if you’re looking for an example of OpenURL and images, see “Introducing djatoka: A Reuse Friendly, Open Source JPEG 2000 Image Server” http://dx.doi.org/10.1045/september2008-chute
