Ivy Guo and James Bagshaw, Victoria University of Wellington
James evaluates collections by looking at usage stats, mostly online, since 97% of the collection budget goes to electronic resources.
Combine everything, gather usage statistics, determine cost-benefit, and seek feedback – this is a cycle. They mostly use COUNTER reports where available. You can look at usage at the book/journal level, and also at the article/chapter level. Database reports include multimedia stats (where the title-level ones don’t). “Investigations” are where someone has looked at e.g. the metadata, abstract, or preview. A “request” is a full-text view/download (or, in the case of multimedia, where the item is played for at least 10 seconds). Lots of info at the COUNTER website.
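For concreteness, here’s a minimal sketch of pulling those two metric types out of a COUNTER 5 title report with pandas – the file name, and the assumption that the export has been saved as plain CSV with the standard COUNTER header rows already removed, are mine rather than from the talk.

```python
import pandas as pd

# Hypothetical file: a COUNTER 5 TR export saved as plain CSV,
# trimmed down to just the data table.
tr = pd.read_csv("tr_journals.csv")

# Each metric type (investigations, requests, etc.) arrives as its own
# row per title; pivot so they sit side by side for comparison.
usage = tr.pivot_table(
    index="Title",
    columns="Metric_Type",
    values="Reporting_Period_Total",
    aggfunc="sum",
).fillna(0)

print(usage[["Total_Item_Investigations", "Total_Item_Requests"]].head())
```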
Good practice – standardised reports, good records management, and combining data into one place as much as possible. Excel provides a lot of power for this, e.g. with pivot tables and formulae. (“It’s not my best friend but I’m on quite good terms with it.”)
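In the same spirit, a rough pandas equivalent of that combine-then-pivot workflow – the folder layout, the costs.csv file, and its columns are all invented for illustration:

```python
import glob

import pandas as pd

# Stack the year's COUNTER exports (assumed pre-cleaned CSVs) into one frame.
frames = [pd.read_csv(path) for path in glob.glob("counter_exports/*.csv")]
usage = pd.concat(frames, ignore_index=True)

# Total unique full-text requests per title.
requests = (
    usage[usage["Metric_Type"] == "Unique_Item_Requests"]
    .groupby("Title")["Reporting_Period_Total"]
    .sum()
)

# Join a hypothetical costs.csv (Title, Annual_Cost) to derive cost-per-use,
# one common input to the cost-benefit step of the cycle.
costs = pd.read_csv("costs.csv").set_index("Title")
report = costs.join(requests).fillna(0)
report["Cost_Per_Use"] = report["Annual_Cost"] / report[
    "Reporting_Period_Total"
].replace(0, float("nan"))

print(report.sort_values("Cost_Per_Use").head())
```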
How do we evaluate Open Access? 80% of OA usage is currently not tracked (according to COUNTER). The “Global Item Report” might help track this, so we can measure not just institutional usage but also wider “world” usage.
Ivy points out “there is such a thing as too much data”. How do we read it all and find the useful thread that tells the story? You need to have your questions in mind first, then look at the data. It also needs to be presented in a meaningful way to stakeholders.
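One small example of “presented in a meaningful way”: a cost-per-use chart rather than a raw table. The titles and numbers below are invented purely for illustration.

```python
import matplotlib.pyplot as plt
import pandas as pd

# Illustrative figures only, not real usage data.
report = pd.DataFrame(
    {"Title": ["Journal A", "Journal B", "Journal C"],
     "Cost_Per_Use": [42.0, 7.5, 1.2]}
).set_index("Title")

# A sorted horizontal bar chart makes the outliers obvious at a glance.
ax = report["Cost_Per_Use"].sort_values().plot.barh()
ax.set_title("Cost per use by title")
ax.set_xlabel("Cost per use ($)")
plt.tight_layout()
plt.show()
```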
In decision-making: “Be bold”, which doesn’t mean reckless – we’ve got the skills, we’ve done the consultation, so “trust the data and trust the process”.
James notes that data often presents more questions than answers. Sometimes you see that usage stats are low and discover your authentication system isn’t set up correctly!