
Dataset published on access to conference proceedings – thank you!

Thanks to all who’ve helped —

(Andrea, apm, Catherine Fitchett, Sarah Gallagher, Alison Fields, KNB, Manja Pieters, Brendan Smith, Dave, Hadrian Taylor, Theresa Rielly, Jacinta Osman, Poppa-Bear, Richard White, Sierra de la Croix, Christina Pikas, Jo Simons, and Ruth Lewis, plus some anonymous benefactors)

— all the conferences I was investigating have been investigated. 🙂 I’ve since checked everything for consistency and link rot; added a set of references that I had to research myself, as I couldn’t anonymise them sufficiently for the initial run; deduplicated a few more times – conference names vary ridiculously – and finally ended up with a total of 1849 conferences, which I’ve now published at https://dx.doi.org/10.6084/m9.figshare.3084727.v1

The immediately obvious stats from this dataset include:

Access to proceedings

  • 23.36% of conferences in the dataset had some form of free online proceedings – full-text papers, slides, or audiovisual recordings.
  • 21.85% had a non-free online proceedings
  • 30.72% had a physical proceedings available – printed book, CD/DVD, USB stick, etc. – but not including generic references to proceedings having been given to delegates
  • 45.27% had no identifiable proceedings

(Percentages don’t add to 100% as some conferences had proceedings in multiple forms.)
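Because the categories overlap, each percentage is computed independently over the whole dataset rather than as shares of a partition. A minimal sketch of that calculation, using a tiny invented dataset (the column names and records here are hypothetical, not from the real spreadsheet):

```python
# Toy records: each conference flags which forms of proceedings it has.
# These five records are invented purely for illustration.
conferences = [
    {"free_online": True,  "nonfree_online": False, "physical": True,  "none": False},
    {"free_online": False, "nonfree_online": True,  "physical": False, "none": False},
    {"free_online": False, "nonfree_online": False, "physical": False, "none": True},
    {"free_online": True,  "nonfree_online": True,  "physical": False, "none": False},
    {"free_online": False, "nonfree_online": False, "physical": True,  "none": False},
]

def category_percentages(rows):
    """Percentage of rows with each flag set. The results can sum to
    more than 100% because one conference may have proceedings in
    several forms at once."""
    n = len(rows)
    return {key: 100 * sum(r[key] for r in rows) / n for key in rows[0]}

pcts = category_percentages(conferences)
```

In this toy example the four percentages sum to 140%, which is exactly the effect seen in the real dataset.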

Access to free online proceedings by year

This doesn’t seem to have varied much over the 6 years most of the conferences took place in:

2006: 39 / 173 = 22.54%
2007: 39 / 177 = 22.03%
2008: 62 / 258 = 24.03%
2009: 63 / 284 = 22.18%
2010: 105 / 428 = 24.53%
2011: 123 / 520 = 23.65%
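The per-year figures are simply the free-online count divided by that year’s total. A quick sketch reproducing the percentages above (the counts come from the table; the code is only a convenience):

```python
# (conferences with free online proceedings, total conferences) per year,
# taken from the table above
by_year = {
    2006: (39, 173),
    2007: (39, 177),
    2008: (62, 258),
    2009: (63, 284),
    2010: (105, 428),
    2011: (123, 520),
}

# Percentage of each year's conferences with free online proceedings
free_pct = {year: round(100 * free / total, 2)
            for year, (free, total) in by_year.items()}
# e.g. free_pct[2006] == 22.54
```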

Conferences attended by country

Conferences attended were held in 75 different countries; those with more than 20 conferences were:

New Zealand: 429
USA: 297
Australia: 286
UK: 130
Canada: 67
China: 66
Germany: 44
France: 41
Italy: 35
Portugal: 31
Japan: 29
Spain: 28
Netherlands: 27
Singapore: 25

I won’t break down access to proceedings here, because this data is inherently skewed by the nature of the sample: conferences attended by New Zealand researchers. This means that small conferences in or near New Zealand are much more likely to be included than small conferences in other parts of the world. If a small conference is less resourced to put together and maintain a free online proceedings – or conversely a large society conference is prone to more traditional (non-free) publication options – this variation by conference size/type could easily outweigh any actual variation by country. So I need to do some thinking and discussing with people to see if there’s any actual meaning that can be pulled from the data as it stands. If you’ve got any thoughts on this I’d love to hear from you!

Further analysis now continues….

Progress report on how you’ve helped my research

At this point at least 20 people have helped me look for conference proceedings (some haven’t left a name so it’s somewhere between 20 and 42), which is awesome: thank you all so much! Last week saw us pass the halfway mark, an exciting moment. As of this morning, statistics are:

  • 1187 out of 1958 conferences investigated = 61% done
  • 312 have proceedings free online (26%)
  • of those without free proceedings, 292 have non-free proceedings online
  • of those without any online proceedings, 109 have physical proceedings (especially books or CDs)
  • 472 have no identifiable proceedings (40%)

I’ve got locations for all 1958, pending some checking. Remember this is out of conferences that New Zealand researchers presented at and nominated for their 2012 PBRF portfolio.

The top countries are:
New Zealand    492
Australia    315
USA    304
UK    133
Canada    69
(with China close behind at 68)

In New Zealand, top cities are predictably:
Auckland    154
Wellington    98
Christchurch    53
Dunedin    38
Hamilton    35

Along the way I’ve noticed some things that make the search harder:

  • sometimes authors, or the people verifying their sources, made mistakes in the citation
  • or sometimes people cited the proceedings instead of the conference itself – this isn’t a mistake in the context of the original data entry but makes reconciling the year and the city difficult.
  • or sometimes their citation was perfectly clear, but my attempt to extract the data into tidy columns introduced… misunderstandings (aka terrible, terrible mistakes).
  • or we’ve ended up searching for the same conference a whole pile of times because various people call it the Annual Conference of X, the Annual X Conference, the X Annual Conference, the International Conference of X, the Annual Meeting of X, etc etc.

On the other hand I’ve also noticed some things that make the search easier – either for me:

  • having done so many, I’m starting to recognise titles, so I can search the spreadsheet and often copy/paste a line
  • when all else fails I have access to the source data, so I can look up the title of the paper if I need to figure out whether I’m trying to find the 2008 or 2009 conference.

And things that could be generally helpful:

  • if a conference makes any mention of ACM, whether in the title or as a sponsor, then chances are the proceedings are listed in http://dl.acm.org/proceedings.cfm
  • if it mentions IEEE, try http://ieeexplore.ieee.org/browse/conferences/title/ – if it’s there, then on the page for the appropriate year, scroll down and look on the right for the “Purchase print from partner” link. Chances are you’ll get a page with an ISBN for the print option, plus confirmation of the location, which is harder to find on IEEE Xplore itself.
  • if it’s about computer science in any way, shape or form, then http://dblp.uni-trier.de/search/ can probably point you to the source(s). This is the best way to find anything published as a Lecture Notes in Computer Science (LNCS) because Springer’s site doesn’t search for conferences very well.
  • if you do a web search and see a search result for www.conferencealerts.com, this will confirm the year/title/location of a conference, and give you an event website (which may or may not still be around, but it’s a start). Unfortunately I haven’t found a way to search the site directly for past conferences.
  • a search result for WorldCat will usually confirm year/title/location and (if you scroll down past the holding libraries) often give you the ISBN for the print proceedings.

And two things that have delighted me:

  • Finding some online proceedings in the form of a page listing all the papers’ DOIs – which resolve to the papers on Dropbox.
  • Two of the conferences in the dataset have no identifiable city/country – because they were held entirely online.

I am of course still eagerly soliciting help if anyone has 10 minutes here or there over the next month (take a break from the silly season? 🙂). Check out my original post for more, or jump straight to the spreadsheet.

Help me research conference proceedings and open access

I’ve been interested for a while in the amount of scientific/academic knowledge that gets lost to the world due to conference proceedings not being open access / disappearing off the face of the internet. My main question at the moment is, just how much is lost and how much is still available?

Unfortunately googling 1,955 conferences will rapidly give me RSI, so I’m hoping I can convince you to do a few for me – in the interests of science!

Background: I’ve written elsewhere about Open Access to conference literature (short version: conferences are where a huge amount of research gets its first public airing, yet conference papers are notoriously hard to track down after the fact) and Open Access and the PBRF (short version: if conference papers were all OA, PBRF verification/auditing would become a lot easier). Here I’m wanting to quantify the situation.

The data: The original dataset was sourced from TEC, from the list of conference-related NROs (nominated research outputs) from the 2012 PBRF round. There are obvious and non-obvious limitations but basically I feel this makes it a fairly good listing of conferences between 2006-2011 that New Zealand academics presented at and felt that presentation was worthy of being included among their best work for the period. The original dataset is confidential, but I’ve received permission to post a derived, anonymised dataset publically for collaborative purposes, and in due course publish it on figshare.

How you can help:
(Note: by contributing to the spreadsheet you’re agreeing to licence your contribution under a Creative Commons Zero licence, meaning anyone can later reuse it in any way with or without attribution. (Though I’ll be attributing it in the first instance – see below.))

  1. Go to the spreadsheet containing the list of conferences
  2. Pick a conference that doesn’t have any URLs/notes/name-to-credit
  3. Search Google/DuckDuckGo/your search engine of choice for the conference name, year, and city to find a conference website. Assuming you find one:
  4. Correct any details that are wrong or missing: eg expand the acronym; add in missing locations; if the website says it’s the 23rd annual conference put “23” in the “No.” column, etc.
  5. Browse on the website for proceedings, list of papers, table of contents, etc. If you find:
    • a list of papers including links to the full text of each paper freely accessible, paste the URL in “Proceedings URL: free online”
    • a list of papers including links to the full text but requiring a login (including in a database or special journal issue), paste the URL in “Proceedings URL: non-free online”
    • information about offline proceedings eg a CD or book, paste the URL in “Proceedings URL/info re print/CD/etc”
    • none of the above, paste the URL of the conference website for that year in “Other URL: conference website”
  6. If you can’t find any conference website at all, write that in “Any notes” so others don’t try endlessly repeating the futile search!
  7. Sign with a “Name to credit” for your work. If you’d prefer to remain anonymous, put in n/a.
  8. If you like, return to step 2. 🙂
  9. Share this link around!

What I’ll do with it:
First I’ll check it all! And obviously I’ll pull it back into my research and finish that up. I’ll also publish the final checked dataset on figshare under Creative Commons Zero licence so others can use it in their research. I’ll acknowledge everyone who helps and provides a name, in the creation of the dataset and in the paper I’m working on. And if someone wants to do a whole pile and/or be otherwise involved in the research then talk to me about coauthorship!

Why don’t I just use…

  • Mechanical Turk: I’m boycotting Amazon, for various reasons. Plus I consider a fair price for the work would be at least US$0.50 a conference (possibly double that) and as that’s a bit harder to afford I feel more ethical being upfront about asking folk to do it for free.
  • Library assistants: I am doing this a bit but there’s a limited period where they’re still working before summer hours and things have got quiet enough that they have time.
  • Something else: Ask me, I may want to!

Other questions
Please comment or email me.

Innovations in publishing; giving control back to authors #theta2015

Innovations in publishing; giving control back to authors
Virginia Barbour, Executive Officer, Australian Open Access Support Group (ORCID)

Lovely slide comparing a title page for the 1665 Phil.Trans of the Royal Society vs a 2014 Royal Society Open Science article on the web including a YouTube movie of the subject seadragon.

What’s worked well and not-so-well? Online > free > data > attribution > authorship > open
(Difference between ‘free’ and ‘open’ is important!)

We’ve changed the philosophy. We’ve begun to understand what we can do with the web. We’ve seen an explosion of models – not just for open, but also for toll. We’ve begun to ‘harness collective intelligence’. We’ve got the technology and processes to do open access, so with Creative Commons we can clearly label what people can/can’t do with something.

So have we fixed publishing? Hmm.

We need new thinking in peer review. Example of CERN paper appearing to find faster-than-light results and putting it up on arXiv for peer review so that someone could figure out what they’d done wrong. But also post-publication peer review – ~”the terrifying thing of publishing OA is that if you’re wrong someone will tell you about it on Twitter five minutes later”. PubMed Commons

Claiming contributions and identity. Disambiguating multiple authors with same name. Technology catching up with this. Hugely empowering for especially women whose names may change pre/post marriage/divorce.

What’s the right version of an article? Can provide “CrossMark” telling you if there’s an update – even works on downloaded PDFs on your computer.

But most of the debate around open access is driven by publishers. How do authors get control? Knowledge.

Areas where she wants authors to have knowledge:

  • where to publish
  • understanding peer review and the black box of publishing
  • understanding how open something is and what can be done with it (eg data mining)

Susan L Janson “research is not finished until it’s published”
Authors need to care as much about publishing as about researching.

The confusing jargon of free

I’m constantly encountering confusion about whether something is in the public domain, or whether it’s open access. And it’s no wonder, because the terminology is inherently confusing.

If someone’s heard that material in the public domain is free for the taking, why shouldn’t they think that a blogpost or a tweeted photo — material on domains that are sometimes excruciatingly public — is included in that?

If publishers have heard about how great open access is, why shouldn’t they think that making some content openly accessible on their site is worthy of press releases vaunting how awesome they are?

(That one was a trick question. Publishers shouldn’t think that because it’s their job to be informed about this stuff. When I see a publisher talking about their “open access” site while their footer continues to be blazoned with “all rights reserved”, I don’t assume they just haven’t come across a proper definition before. I assume they’re wilfully taking advantage of the confusing terminology in order to intentionally deceive people while retaining plausible deniability, and they go on my list of Do Not Trust The Evil.)

The opposite of ‘public domain’ isn’t ‘private’; it’s ‘copyrighted’. This means:

  • Material created in the 19th century and earlier is mostly in the Public Domain (even if it’s in private ownership) because the copyright has expired.
  • Material created recently is generally not in the Public Domain (even if the copyright-holder has made it public by publishing it in a book, a newspaper, a webpage, a social media post, Times Square, and/or laser-writing on the moon) but is rather protected by copyright law. This means the copyright-holder — who is often but not always the author — holds the right to decide what other places the work can or can’t be published in.

The opposite of ‘open access’ isn’t ‘inaccessible’; it’s ‘all rights reserved’.

Something that’s inaccessible can’t be open access; this is true. But being accessible isn’t sufficient. Access has to be guaranteed, either by virtue of the material being in the public domain, or by means of the copyright-holder granting an appropriate licence, aka permissions, to users of the material. This allows users to share/take over responsibility for making the material accessible if the copyright-holder can no longer, or no longer wants to, do it themselves.

This is abstract and therefore potentially confusing, so let’s look at a concrete example like Chris Hadfield’s cover of “Space Oddity”. Oh wait — we can’t look at it anymore, because while it was openly accessible for a year, it was never open access. David Bowie’s representatives gave permission for the song to be used for one year, so for one year the video was accessible. But no-one ever gave viewers permission to make and upload their own copies of it to guarantee perpetual access.

(Okay, so users have nevertheless made their own copies and uploaded them all over the place. This is because, firstly, the Internet is forever, and secondly, the video is fantastic. But every single one of these copies is illegal.)

People more familiar with the scholarly publishing landscape may notice I’m almost arguing that green open access and gold open access aren’t actually open access. And you know, I’m okay with saying that an open access article which disappears from the web because the only institutional repository allowed to store it goes down; or an open access journal which suddenly decides to shut all its previously accessible content behind a paywall — that these were never actually open access.

Open access means not just knowing that it’s accessible to everyone now, but knowing that it’s allowed to be accessible to everyone in the future too.

Loyalty cards for scholarly publishing

Two things I’ve come across recently which I don’t think I’ve seen before:

“Each article published in ACS journals during 2014 will qualify the corresponding author for ACS Author Rewards article credit. Credits issued under this program, at a total value of $1,500 per publication, may be used to offset article publishing charges and any ACS open access publishing services of the author’s choosing, and will be redeemable over the next three years (2015-2017).”
American Chemical Society extends new open access program designed to assist authors

“Under [IOP’s] new programme, referees will be offered a 10% credit towards the cost of publishing on a gold open access basis when they review an article.”
Changing the way referees are rewarded

(I’m presuming, though it’s not explicit, that these credits are additive, so if you published 2 toll-access articles with ACS you’d get $3,000 credit, and if you refereed 10 IOP articles you’d get to publish 1 article on a gold open access basis for free.)
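Taking the two schemes at face value, and assuming (as the post presumes) that credits are additive, the arithmetic works out like this – a sketch only, since the programmes’ fine print may differ:

```python
ACS_CREDIT_PER_ARTICLE = 1500  # USD per 2014 ACS article, per the announcement
IOP_CREDIT_PER_REVIEW = 10     # percent of one gold OA fee, per article refereed

def acs_credit(articles_2014):
    """Total ACS credit earned, assuming credits simply add up."""
    return ACS_CREDIT_PER_ARTICLE * articles_2014

def iop_reviews_for_free_article():
    """Reviews needed before accumulated 10% credits cover one gold OA
    article outright (counted in whole percentage points to avoid
    floating-point drift)."""
    percent_covered = 0
    reviews = 0
    while percent_covered < 100:
        reviews += 1
        percent_covered += IOP_CREDIT_PER_REVIEW
    return reviews
```

So two 2014 ACS articles would yield $3,000 of credit, and ten IOP reviews one fee-free gold OA article – if, and only if, the credits stack as presumed.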

I find this fascinating. The obvious catch for scientists is the same as any loyalty card: in order to use it you’ve got to keep shopping at the same company. It’s great psychology, because humans are notoriously reluctant to ignore the opportunity for a discount, so:

  • Someone who’s got credit owing will be less likely to publish in some other journal even if the final cost-to-author is equal and even if that other journal is a better fit for the particular article. (How much less likely I don’t know, but I do think it’d be a factor.)
  • Someone who’s got credit owing for OA publication would probably be more likely to pay the extra to publish OA rather than to publish toll-access for free but not get to use that tempting credit. (This might at least have a small side-effect of getting more people experience with the benefits of publishing open access.)

Both of these are obviously what the companies in question are banking on. I’m a bit concerned about what this pressure to publish with the same old big companies will mean for science – partly about competition, as in the world of supermarkets, but also partly the journals where articles should be finding their best fit. (Perhaps the whole ‘impact factor’ issue has meant that no-one’s ever considered only subject scope in that regard, but this definitely adds another confounding factor.) But given the clear financial benefits to the companies, I expect to be seeing more scholarly publishing reward cards popping up in future.

Open Access cookies

Creative Commons Aotearoa New Zealand are running a series of blogposts for Open Access Week, and I’ve contributed Levelling up to open research data.

I also, for Reasons, had an urge tonight to make Open Access biscuits. (I know my title says ‘cookies’, but the real word is of course ‘biscuits’, and I shall use it throughout the rest of this post along with real measurements and real temperatures. Google can convert for you, should you need it to.) The following instructions I hereby license as Creative Commons Zero, which should not be taken as a reflection on their calorie count.

First I started with a standard biscuit base recipe. You could use your own. I used the base for my family’s recipe for chocolate chip biscuits, which probably means it ultimately derives from Alison Holst, but I think I’ve modified it sufficiently that it’s okay to include here:

  1. Cream 125 grams of butter and 125 grams of sugar. The longer you beat it, the lighter and crisper the biscuits will be.
  2. Beat in 2 tablespoons sweetened condensed milk (or just milk will do, at a pinch) and 1 teaspoon vanilla essence.
  3. Sift in 1.5 cups of flour and 1 teaspoon of baking powder and mix to a dough.

Now we diverge from the chocolate chip recipe by not adding 90 grams of chocolate chips. We also divide the mixture in half, dyeing one half orange with a few drops of red colouring and three times as many drops of yellow colouring:

Open Access biscuits step 1

The plain lot should then be divided into halves, each half rolled long and flat.
The orange lot should have just a small portion taken off and rolled into a fat spaghetto (a bit thinner than I did would be ideal), and the rest rolled into a large rectangle.

Then start rolling it together into our shape. The orange spaghetto gets rolled up into one of the plain rectangles. In this photo I’m doing two steps at once – most of the orange hasn’t been properly rolled out yet:

Open Access biscuits step 2

Then roll the rest of the orange around that with enough hanging off the top that you can fit some more plain stuff in to keep the lock open:

Open Access biscuits step 3

The ends will be raggedy. Don’t worry, this is all part of the plan.

At this point, put your roll of dough into the fridge to firm up a bit while you do the dishes. You could also consider feeding the cat, cooking dinner, etc. Or you can skip this step (or shorten it as I did) and it won’t hurt the biscuits, you’ll just have to do more shaping with your fingers because cutting the slices squashes them into rectangles:

Open Access biscuits step 4

These slices are about half a centimetre thick. I got about 38 off this roll, plus the raggedy ends. Remember I said those were part of the plan? Right, now – listen carefully, because this is very important – what you need to do is dispose of all the raggedy ends that won’t make pretty biscuits by eating the raw dough. I know, I know, but somebody’s got to do it.

The rest of the biscuits you put on a tray in the oven on a slightly low setting, say 150 Celsius, while you do the dishes that you missed last time because they were under things, and generally tidy up. 10 minutes or so, but whatever you do don’t go and start reading blogs because once these start to burn they burn quickly. Take them out when the ones in the hottest part of the oven are just starting to brown, and turn out onto a cooling rack.

Et voilà, open access biscuits:

Open Access biscuits step 5

Open access and peer review

We’re likely to be hearing about John Bohannon’s new article in Science, “Who’s afraid of peer review?” Essentially the author created 304 fake papers with bad science and submitted one to each of 304 ‘author-pays’ open access journals to test their peer review. 157 of the journals accepted the paper, 98 rejected it; the rest were abandoned websites or still have/had the paper under review at time of analysis. (Some details are interesting. PLOS ONE provided some of the most rigorous peer review and rejected it; OA titles from Sage and Elsevier and some scholarly societies accepted it.)

Sounds pretty damning, except…

Peter Suber and Martin Eve each write a takedown of the study, both well worth reading. They list many problems with the methodology and conclusions. (For example, over two-thirds of open access journals listed on DOAJ aren’t “author-pays” so it’s odd to exclude them.)

But the key flaw is even more obvious than the flaws in the fake articles: his experiment was done without any kind of control. He only submitted to open access journals, not to traditionally-published journals, so we don’t know whether their peer review would have performed any better. As Mike Taylor and Michael Eisen point out, this isn’t the first paper with egregiously bad science that’s slipped through Science‘s peer review process either.

LIANZA and open access

Moving to a new job has been keeping me happily preoccupied, but the email I received from LIANZA yesterday was just about calculated to spur me to break radio silence. To quote, interspersed with my commentary in [square brackets]:

From November 22, parts of the LIANZA website will be locked to members only. As the cost of developing and maintaining the website comes out of LIANZA membership fees, LIANZA Council decided to make certain pages exclusive to members. The Council worked with the Website Advisory Group to determine appropriate members-only content.

From November 22 you will need to login to the website to view these locked pages:

  • LIANZA Blog
    [Really? Does anyone really think I’ll log in to read a blog? I won’t even click a link to read a blog; I certainly don’t have time to log in to a website just to find out if there happens to be a post today. If the full post isn’t in my Google Reader, I don’t read it.]
  • Library Life newsletter features
    [I occasionally click a link from the email newsletter to read the full story. That’s about to become even more occasional.]
  • Latest issue of the New Zealand Library and Information Management Journal (NZLIMJ)
    [This implies that previous issues will remain accessible, which is something at least. But still a tremendous disappointment. I thought I’d been seeing a move towards opening NZLIMJ up, and had hoped to see it soon appear in the Directory of Open Access Journals. In the current climate, I think a library association should be promoting open access, not locking information down.]
  • Conference papers
    [!!!

    Just… What a tremendous disservice this does to the authors! Conference papers are hard enough to search as it is; locking these behind a login only guarantees that no LIANZA non-members (and not many LIANZA members) will ever read or cite these. Don’t we want rather to raise the profile of New Zealand LIS research?]
  • Copyright resources
    […Okay, if you really must have an easter egg for LIANZA members I guess this qualifies as reasonable.]
  • Member profiles
    [Okay, sure, whatever.]
  • Advocacy Portal (already restricted to members)
    [Because it’s… vitally important that only LIANZA members advocate for libraries…? To be honest I can see the argument for this as a valuable resource. I just think it’d be even more valuable if we all – members and non-members alike – cooperated on advocating for both our individual libraries and libraries as a class.]
  • Code of Practice
    [This comprises the “policy and procedures that are to be followed, day to day, in the running of the Association.” So mostly only of use to members; otoh it seems a bit odd to keep it secret.]

Does LIANZA actually have evidence that there are significant numbers of people choosing not to be members because the content’s there for free anyway? Enough people to be worth causing this hassle to existing members?

Because as a member, this does increase the hassle for me to access the content, and therefore reduces the amount of content I’ll be bothered to look up. When I was a member of the Website Advisory Group, a big concern was getting conversations going on the website; hiding those conversations away just seems likely to exacerbate that problem. This move also reduces the visibility of LIS scholarship published by LIANZA, so makes it less likely I’d consider submitting to NZLIMJ (however see footnote). And philosophically, I’m not overly happy about paying a subscription to a library association that is working against open access to information.

Lucky for LIANZA’s coffers, membership comes with other benefits that still make it worth the annual cheque. Because the moment its website content is locked behind a login screen, its value to me plummets.



Footnote for authors: If your conference paper is about to be locked behind the login screen but you actually would like other librarians nationally and internationally to have a chance at finding your research, you can deposit a copy at E-LIS – a subject repository for library and information science. (And/or in your institution’s repository if it has one.)

Likewise for NZLIMJ articles – the author guidelines state a 6 month embargo for publication elsewhere, but I emailed editor Brenda Chawner to clarify this, and she says she interprets it to apply to formal publications, not repositories, and it would be fine with her if authors put copies of their articles into an institutional or subject repository.

How libraries can buy DRM-free ebooks

Libraries hate DRM because our customers hate DRM because it makes the ebooks we buy really truly appallingly horrible to use. I can never find the cartoon when I want it, but it’s something like “How to download an ebook in 37 easy steps”. It involves lots of installation of software and restarting of the computer and logging in to things and troubleshooting, and the final step is to give up and look for it on BitTorrent. (ETA: As per Andromeda’s comment, here’s the cartoon.)

But what can we do when publishers require DRM before they sell anything to us?

Well, the new venture Unglue.it could change things. The idea behind Unglue.it is that:

  • author/copyright-holders pick a lump sum that they think is fair compensation for the rights to their book;
  • people who want to read the book pledge however much they want;
  • when the lump sum is reached, the book is released as a DRM-free, open-licensed ebook, free to the entire world. (If the lump sum isn’t reached, no money’s taken from your credit card.)
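The threshold mechanic above is an assurance contract: pledges are only collected once the copyright-holder’s asking price is reached. A minimal sketch of that settlement rule (the goal figure and pledger names are invented for illustration):

```python
def settle_campaign(goal, pledges):
    """Return the amount charged to each pledger: everything if the goal
    is met (and the book is released as a DRM-free, open-licensed ebook),
    nothing otherwise."""
    total = sum(pledges.values())
    if total >= goal:
        return dict(pledges)              # funded: pledges are collected
    return {name: 0 for name in pledges}  # goal missed: no money is taken

# Hypothetical example: 16,000 library branches each pledging US$1
# towards a made-up US$7,500 asking price
branch_pledges = {f"branch_{i}": 1 for i in range(16_000)}
charges = settle_campaign(goal=7_500, pledges=branch_pledges)
```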

This is aimed at individual readers, but why shouldn’t libraries get in on the game? There are apparently some 16,000-odd public library branches in the USA: if each one of those made a one-off pledge of US$1 then American Book Award-winner Love Like Gumbo would be available to their members (and everyone else in the world) in perpetuity. That’s one heck of a cheap ebook. You can store a copy on the library server, or just link to it from the catalogue. You can print it out, if you want – as many times as you want. And you won’t have to buy it again after it’s been borrowed 26 times.

Currently Unglue.it has campaigns for five books. (If this takes off, and I’m convinced it will, there’ll be more.) If any of these books would be of interest to the members of your library, then figure out what’s a fair price (or what you can afford — whichever’s less) and then pledge just half of that from your book budget.

If you really can’t afford it (or purchasing really has to go through approved suppliers, no exceptions ever), well, then promote the campaigns to your members instead.

Or do nothing. When the books are funded, you and your members will get them for free anyway. 🙂

I just think that this is such a natural extension of our mission to use our funds wisely to provide resources to our communities that it’s hardly an extension at all. I think it’s the answer we’ve been asking for to the problem of ebooks. And I think it’s the best consortial deal ever.

So let’s go forth and Unglue!