
You are what you count #anzreg2019

Rachelle Orodio & Megan Lee, Monash University

Very often we count what’s easy to count, rather than what’s meaningful. Monash created a project that started by identifying which metrics they should collect.

Principles: metrics should be strategic, purposeful, attributable, systematic, consistent, accurate, secure and accessible, efficient, integrated. Wanted metrics to reflect key library activities.

Identified 35 metrics – 18 were manually recorded into Google Forms, Qualtrics and other temporary storage. All needed to be pulled into one place so the data could be cross-referenced and visualisations created. Data is only valuable if it can be used and shared.

Looked at Tableau, Splunk, Power BI (uni-preferred for use with data warehouse), Excel, OpenRefine, Google Data Studio.

Data sources: Alma/Primo analytics, Google analytics, EZproxy, Figshare, Libcal/LibGuides, the people counter, and custom software, spreadsheets, forms, manual recording. Quarterly email for collection of manual data.

Dashboard in Tableau with eg number of searches in Primo and how many searches produce zero results; usage of discussion rooms vs availability. Tableau provides sophisticated visualisations, integrates with lots of sources and is great for large datasets. But it has expensive annual fees, needs a server environment to share reports securely, and isn’t as easy to use as Power BI.

Power BI example showing reference queries. Easy to learn and most functionality is available in the free version; full control over the layout; changes are reflected immediately from one graph to another, eg when you filter to one library. But to share the interactive version the other person also needs a license – or you pay thousands of dollars for a cloud computing license.

Alma Analytics FTP – used for new titles list. Create report, schedule a job, FTP, then process files, upload to LibraryThing to get bookcovers in a carousel.
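
A minimal sketch of what the “process files” step might look like in Node.js, assuming the scheduled job has FTP’d a tab-delimited report to disk (the file name and column order here are invented):

const fs = require('fs');

// Read the FTP'd Analytics report, skip the header row, drop empty lines
const rows = fs.readFileSync('new_titles.txt', 'utf8')
  .split('\n')
  .slice(1)
  .filter(line => line.trim());

// Pull out the ISBNs to feed to LibraryThing for cover images
const isbns = rows.map(line => {
  const [title, isbn] = line.split('\t');     // hypothetical column order
  return (isbn || '').replace(/[^0-9Xx]/g, ''); // strip hyphens etc.
}).filter(Boolean);

fs.writeFileSync('isbns_for_librarything.txt', isbns.join('\n'));
console.log(isbns.length + ' new titles ready for the carousel');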

Project is ongoing. Scoping is important. There’s lots of info you could present; you have to select the key data based on the target audience, their needs etc.

Harnessing Alma Analytics and R/RShiny for Insights #anzreg2019

David Lewis & Drew Fordham, Curtin University

Interactive visualisation tools are useful as they let the user choose (within parameters) what they want to see. Alma Analytics was a bit limited. Looked at products like Tableau, but it’s mostly for visualisation (and expensive), albeit easy to use. R/RShiny is free to install on the desktop, more of a learning curve but worth it.

Early successes:

  • Exporting from Analytics -> CSV -> cleaning with R -> reimporting into Alma. A weeding project with printouts of the whole collection was highly manual, error-prone and seemingly endless. With R, they ran logic over the entire collection and could print targeted pick lists for closer investigation (see the sketch after this list). Massively accelerated deselection.
  • Could also fine-tune the shelving roster over the semester, which saved money.
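
The presenters worked in R; purely as an illustration, the kind of pick-list logic described might look like this (thresholds and field names are invented, not Curtin’s actual rules):

// Sample data standing in for the exported Analytics CSV
const collection = [
  { callNumber: '330.1 KEY', title: 'General Theory', totalLoans: 0, yearsSinceAccession: 22, isLastCopy: false },
  { callNumber: '823.912 ORW', title: 'Essays', totalLoans: 5, yearsSinceAccession: 3, isLastCopy: true },
];

// Flag long-unused items for closer investigation
const pickList = collection.filter(item =>
  item.totalLoans === 0 &&
  item.yearsSinceAccession > 10 &&
  !item.isLastCopy
);

// Print a targeted pick list instead of the whole collection
pickList.forEach(i => console.log(i.callNumber + '\t' + i.title));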

Refurbishment modelling needed to create a low-use compactus collection. Created a model of the previous semester as if the collection had been shelved that way, to see what would actually need to be moved back and forth. Let people explore the parameters. Ended up deciding there’d be a lot of movement in and out of the open access collection and it would still require a lot of staff effort – so the compactus needed to be open access, not closed access.

Getting started with Alma Analytics and the Trove API: started with documentation, then experimenting. Found the only match point was the ISBN. Record structures are complex, so they needed to know which substructures were relevant. Created a test SQL schema and started trying test queries. The next phase took 3-4 days to retrieve all their holdings from Trove. Then started importing into a SQL database; views were cumbersome, so they created a table from the view and indexed that, which proved a lot faster.
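
As an illustration, an ISBN lookup against the Trove API might look like this in Node.js (v2-style endpoint; the response paths and the TROVE_KEY environment variable are assumptions – check the current Trove documentation):

const isbn = '9780199560639';
const url = 'https://api.trove.nla.gov.au/v2/result'
  + '?key=' + process.env.TROVE_KEY
  + '&zone=book&encoding=json&include=holdings'
  + '&q=isbn:' + isbn;

fetch(url)
  .then(r => r.json())
  .then(data => {
    // Count matching works (exact path is an assumption)
    const works = data.response.zone[0].records.work || [];
    console.log(isbn + ' -> ' + works.length + ' matching work(s)');
  });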

Visualisation examples:

  • number of libraries with shared holdings – in WA, interstate, or both; at university libraries, other libraries, or both; not borrowed since [date slider input].
  • usage by call number – user can select call number range, not borrowed since, etc.

Expanded their professional networks in the process; their analyses are making a lot of impact.

Using APIs to enhance the user experience #anzreg2019

Euwe Ermita

Went live with Primo, Alma, Rosetta and Adlib in 2017. Trying to customise interfaces to fit user needs and reach parity with the previous system.

Adlib (manuscripts, oral history and pictures catalogue) with thumbnails pointing back to Rosetta. Primo doesn’t do hierarchies well, but Adlib can show a collection in context. But it’s a different technology stack – .NET, while their developers were used to other technologies – so they had to bring in skills.

Still getting lots of feedback that the experience is inconsistent between website, catalogue, collection viewer, etc. Users would get lost. System performance was slow for large collections, with downtime around many release dates.

Options:

  • do nothing (and hide from users)
  • configure out of box – but hitting diminishing returns
  • decouple user interfaces (where user interface is separate from the application, connected via web services)

Application portfolio management strategy

  • systems of record – I know exactly what I want and it doesn’t have to be unique (eg Rosetta, Alma) – longer lifespan, maintain tight control
  • systems of differentiation – I know what I want but it needs to be different from competitors (eg Primo, their own website)
  • systems of innovation – I don’t know what I want, I need to experiment (developing their own new interfaces) – shorter lifespan, disruptive thinking

But most important is having a good service layer in the middle.

Lots of caching so even if Alma/Primo go down can still serve a lot of content.

Apigee API management layer – an important feature is the response cache, so API responses get stored ‘forever’ – cuts response time to 1/180, and cuts load on back-end systems, avoiding hitting the API limit. Also handy to have this layer if you want to make your data open: whatever system you have behind the scenes, the links you give users don’t change; you can also give users a customised API (rather than giving them a key to your backend system).
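
Not Apigee itself, but a minimal sketch of the response-cache idea: repeat calls are served from memory, so the back-end system is only hit once per URL (the back-end address is a placeholder):

const http = require('http');
const cache = new Map();

http.createServer(async (req, res) => {
  if (!cache.has(req.url)) {
    // First request for this URL: hit the back end once, keep the response
    const upstream = await fetch('https://backend.example.edu' + req.url);
    cache.set(req.url, await upstream.text());
  }
  res.end(cache.get(req.url));  // every repeat is served from memory
}).listen(8080);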

MASA – Mesh App and Service Architecture. Want to get rid of point-to-point integrations: if one point changes, you have to update all your integrations. Instead you just update the single point-to-mesh connection.

Have done an internal prototype release, looking at pushing out to public end of this year/early next year.

Takeaways:

  • Important to have an application strategy – use systems for their strengths (whether that’s data or usability)
  • Don’t over-customise systems of record: it creates technical debt. Every time there’s an upgrade you have to re-test, re-customise
  • Play with API mediation/management – lots of free tools out there
  • Align technology with business strategy

Primo Workflow Testing with Cypress #anzreg2019

Taskforce – Primo Workflow Testing with Cypress
Nishen Naidoo, Macquarie University

The special interest working group on interoperability has restructured around taskforces focusing on specific community issues. The first one has been Primo Workflows – Lee Houghton is project leader, with 18 people involved. Working on:

  • workflow requirements gathering (documentation)
  • workflow testing implementation (coding for automatic testing)

Manual testing takes time – there’s more and more to test, more and more often, and less and less time. This means we’re forced to only test the most vital things while other things slip off the radar – especially accessibility.

What if we remove the “manual” from testing, using cypress.io? Cypress is intended for testing web applications – uses JavaScript for writing tests and popular testing frameworks under the hood (Mocha and Chai). With Cypress Scenario Recorder you can do your test in a web browser and record it, like a macro.

You need Node.js installed. Then, in an empty directory (from cmd.exe):
> npm install cypress
> npx cypress open
This sets up example files and tests. There are four folders in the cypress directory: fixtures (static config files, eg Primo URL, username/password etc), integration (example tests), plugins (for extending functionality), and support (commands – lets you package up steps in a task).

Looking at integration tests – to run something, just click and it runs a whole series of tests at lightning speed. A test sets up a context() which groups everything together. beforeEach() gets triggered before each test (eg to open a fresh Primo page). it() is a test with a bunch of actions, eg type content into a field and test that it’s there, click on different parts of the page, get specific parts of the DOM. If we don’t get what we expect, Cypress tells you the test failed.
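
A sketch of that structure (the URL and selectors are placeholders, not a tested Primo suite):

context('Primo basic search', () => {
  beforeEach(() => {
    cy.visit('https://primo.example.edu');   // fresh Primo page for every test
  });

  it('returns results for a simple query', () => {
    cy.get('#searchBar').type('economics journal{enter}');
    cy.get('.results-container').should('exist');   // test fails if this is missing
  });
});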

As well as just saving you time because it’s so fast, you can schedule it to run in the background and just notify you if something’s broken.

eg
cy.get("#searchBar").type("economics journal");  // types "economics journal" into the search bar
cy.get("div.search-actions").click();  // clicks the search button

After running it (and seeing the result) you can ‘time-travel’ – hover on each command to see what the browser looked like at that stage.

One downside is you can’t move between domains within a test – a big problem for testing single sign-on, which relies on a lot of transitions between domains, so you get a cross-origin error. This makes it hard to test things that rely on a user being logged in. How single sign-on works:

Identity Provider <--- pre-configured certificates ---> Service Provider (Primo PDS)
        ^--------------------------- user ---------------------------^

All communication between the two goes through the user, so we can simulate it using the Cypress Request Agent. Fixtures hold the URLs and passwords. before() runs once before all tests (it does the login), then beforeEach() goes to a new Primo page, then a function tests whether the username shows in the menu.
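
A hedged sketch of that approach – a real SAML login needs more round-tripping than this, and the fixture fields, URLs and selectors here are all invented:

before(() => {
  // Log in once by talking to the IdP directly, the way the browser would
  cy.fixture('credentials').then((creds) => {
    cy.request({
      method: 'POST',
      url: creds.idpLoginUrl,          // Identity Provider endpoint
      form: true,
      body: { username: creds.username, password: creds.password },
      followRedirect: true,            // ride the redirects back to Primo
    });
  });
});

beforeEach(() => {
  cy.visit('https://primo.example.edu');   // new Primo page per test
});

it('shows the username in the menu', () => {
  cy.get('.user-menu').should('contain', 'Test User');   // placeholder selector
});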

Q: Aim to share these tests with the community?
A: Yes. 🙂

Central Discovery Index #anzreg2019

Central Discovery Index Working Group
Erin Montagu on behalf of Sonja Dunning, University of Western Australia

CDI is to replace both PCI and the Summon Index. Hundreds of millions of records. Will be integrated into Alma, so activation workflows are simplified; faster update cycles, merged records and a new search algorithm. UWA library hopes to gain enhanced discovery and operational efficiencies – hoped joining the working group would let them influence development and avoid pitfalls.

Moving from one system to another is not always one-to-one. Testing first to make sure CDI activations covered all Alma activations; later, to make sure search/discovery works as expected. Findings:

  • local collections weren’t mapped – may have to change how these are set up in Alma
  • duplicate collections – Ex Libris is investigating
  • some collections not in CDI – hopefully addressed by final rollout
  • inaccurate result counts – hopefully addressed by final rollout

More testing in progress re search/ranking/full-text rights/record display. Then analysis and development of maintenance guidelines.

Preview:

  • A new facet for CDI search activation; CDI info displaying on collection record.
  • Can “(De)Activate for Search in CDI” in the electronic collection view – much easier, but lots of information, eg about what the collection contains, won’t be migrated, which will make troubleshooting harder. (Have provided this feedback but haven’t heard a response.)
  • Can search on CDI fulltext rights, linking etc.
  • CDI tab added to collection record with activation status.
  • In Primo, “multiple sources exist” becomes a merged record.
  • More records in search results due to “improved” search algorithm – don’t know how this works
  • More resource types (including data_sets) (more info on the Knowledge Centre: “Resource types in CDI”)
  • More features to be added

Individual switchover Mar-Jun 2020, general switchover (of all remaining customers) July.

For more info from working group: cdi_info@exlibrisgroup.com

Creating actionable data using Alma Analytics #anzreg2019

Beyond the numbers: creating actionable data using Alma Analytics dashboard
Aleksandra Petrovic, University of Auckland

Using analytics to inform relegation of print resources (to off-site storage) and retention (on main shelves).

Alma Analytics lets you create very detailed reports, but it’s a fair amount of work, especially the data cleaning and analysis needed to get 100% accuracy. A lower-accuracy option using the dashboard is much quicker. Visualisations they used included:

  • Overview by subject, showing how many items had no, low, medium or high usage in different subjects based on checkout history.
  • Overview of usage by publication year bands
  • Overview of usage of possible duplicates in different subjects
  • Overview of weeding reports that could be investigated more closely
  • Overview of books needing preservation
  • Quick stats, eg monograph count, zero uses, low uses, over 10 years old, possible duplicates – per library

Weeding parameters:

  • publication year
  • Alma usage
  • accession year
  • historical usage
  • possible duplicates

(Other libraries might also consider value, authorship (eg by own institution’s authors), theses (irreplaceable), donations/bequests.)

Different methodology types: eg a soft methodology would give counts of “soft retain” and “soft relegate” items. Could improve with weighted indexes, among other options.
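
As an illustration of the weighted-index idea (weights and field names are invented; a positive score meaning “soft retain”):

// Invented weights for illustration only
function retentionScore(item) {
  return 3 * item.recentCheckouts            // Alma usage weighted most heavily
       + 1 * item.historicalUses
       - 2 * (item.possibleDuplicate ? 1 : 0)
       - 1 * Math.max(0, item.yearsSincePublication - 10);
}

const book = { recentCheckouts: 0, historicalUses: 2, possibleDuplicate: true, yearsSincePublication: 25 };
console.log(retentionScore(book) >= 0 ? 'soft retain' : 'soft relegate');  // -> soft relegate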

Q: Will you share reports in community area?
A: Yes, though some are very specific to Auckland so can’t promise they’ll automatically work.

Q: Are you using Greenglass with this approach?
A: Using this by itself.

Q: Ex Libris have released some P&E duplication reports – how do you approach risk if an electronic item is in an aggregator collection (and might disappear…)?
A: Excluded all electronic items from dashboard as it needs more information about subscribed vs owned. This is a next step…

 

Achieving self-management using Leganto #anzreg2019

DIY for academics: a ‘how to’ guide for achieving self-management using Leganto
Kirstie Nicholson, University of Western Australia

“Self-management” of reading lists means unit coordinators creating, editing and submitting reading lists. Gives them autonomy and is efficient for the library.

Previously lists were submitted via email; the library would create them in Alma course reserves (liaising with unit coordinators) and students had access via Primo. This was always meant to be temporary but became permanent with age. Fully library-managed, so due to the work involved it was limited to essential items only. Usage was low, which they felt was due to limited functionality. Highly inefficient to process and to monitor.

The new model aimed to encourage and support self-management; allow student access via the LMS (Blackboard); allow non-essential items; and have liaison librarians rather than list-processing staff liaise with coordinators. Knew some coordinators wouldn’t want to learn a new system, would be too busy to self-manage, would want the library to keep managing things, and wouldn’t use Leganto. So they retained a library-managed list option with some restrictions (a last resort, essential readings only, and only basic Leganto functionality).

Started with 10-unit pilot, then went to full implementation in 2018. Branded it as “unit readings” (name chosen by pilot participants) and rolled over existing lists.

97% (215) of lists were self-managed in 2018 – reviewed, submitted and published by coordinators (with assistance available). In S1 2019, 99.5% of lists – only one was library-managed. Very good feedback from coordinators on ease of use: intuitive, easy to integrate, fast, responsive. Why did it go so well?

  • The pilot provided real champions speaking up in support of it, and great survey comments from both staff and students, which helped promote it. Also a confidence boost for library staff, affirming the model. In the pilot they could do one-on-one training, which taught a lot about what users needed from the system and fed into the implementation.
  • Functionality was a big leap up. Built to encourage academics to use it, eg auto-complete, which encourages self-management behaviour.
  • An all-library approach to the project. Library management buy-in, so all staff were invested. Roles well delineated; staff confident in the benefits and well equipped/trained to support coordinators.
  • Messaging emphasised that it was a university-supported project tying into uni strategy/goals (not just the library); not paperwork but part of preparing for the unit; benefits for academics and students.
  • Used old approaches as a cue for new opportunities, eg when an emailed list arrived, used it as an opportunity to meet the coordinator and show them the new system.

Challenges

  • Publishing: meant to be the academics’ responsibility, but they often neglected this step and needed lots of followup. From Semester 2 the library will take over this responsibility (which is easy) and change messaging to focus on getting academics to switch on LTI.
  • Full engagement with the interface: they’d come in and create a list, but not return to look at student interactions or add readings
  • Using more self-management functionality: haven’t opened up rollover, etc
  • Support content: what level of support content to provide, how to provide info needed without creating a whole manual. Ex Libris content doesn’t always match their workflows.
  • Transitioning off the old system: a third of lists haven’t migrated, so need to find out why (eg maybe the unit is no longer taught).
  • Uneven use across faculties: both of Leganto and of the LMS.

Future plans to address these:

  • Student benefits are main motivator for academics to transition so want to use analytics more to demonstrate this
  • Targeted communications: define groups of users/non-users and target messaging appropriately; also target based on time of year
  • Support model: communicate this better.
  • Educational enhancement unit: work with this team and target early career educators
  • Usability testing

Q: How did you link Leganto introduction to university goals?
A: Mostly in the realm of engagement librarians at teaching and learning committees. Sent bullet points with them, eg how it ties into the uni educational strategy, student retention etc.

 

What do users want from Primo? #anzreg2019

What do users want from Primo? Or how to get the evidence you need to understand user behaviour in Primo.
Rachelle Orodio & Megan Lee, Monash University

Surveyed users about the most important services. #4 is LibrarySearch letting them use it quickly; #9 is off-campus access. Feedback that LibrarySearch is “very slow and bulky”, accessing articles “takes way too many steps”, search results are “hard to navigate”, “links don’t work”.

Project with strategic objectives, success factors, milestones, etc.

Started by gathering data on user behaviour – Primo usage logs, Primo/Alma analytics, Google analytics. Ingested into Splunk. Got a large dataset: http://tinyurl.com/y5k4nzr4 

How users search:

  • 90% start on the basic screen, and 98% use the “All resources” scope (not collections, online articles, popular databases) – basically using the defaults.
  • Only 15% sign in during a session. 51% click on the availability statement, 45% click on ViewIt links. Sometimes filter by facets, rarely sort. Don’t display reviews or use tags; don’t browse; don’t use lateral or enrichment links. Little take-up of citation export, save session/query, add to eShelf, etc.
  • Most searches are 2-4 words long: 69% are under 7 words – 14% are longer than 50 words! 1.13% of searches are unsuccessful

Two rounds of user testing. Splunk analytics -> designed two views (one similar to classic, one stripped down) and ran think-aloud tests with 10 students using these views, along with pre-test and post-test surveys. Results were classified into: user education, system changes, system limitations. System changes were made and testing rerun with another group of students. Testing kits at https://tinyurl.com/y4fgwhhx

Surveys:

  • Searching for authoritative information: start at Google Scholar and databases, only going to Primo if they hit a paywall.
  • Preferred the simplified view. Said the most useful features were advanced search, favourites and citation links to styles – but this wasn’t borne out by observations
  • Liked the “Download now” (LibKey I think) feature and wanted it everywhere

Observations:

  • only sign in if they need to, eg to check loans or read articles. So want to educate users and enable auto-login
  • Only a few individuals use advanced search
  • don’t change the scope – renamed scopes and enabled auto-complete
  • prefer a few facets – simplified the list of facets
  • don’t change the sorting order – changed its location and educating users
  • want fewer clicks to get full text
  • not familiar with saved queries – needs education

Put the new UI in beta for a couple of months, ran roadshows and blog communications. Added a Hotjar feedback widget to the new UI. Responses average a 2.3 rating out of 5 – hoping that people happy with things just aren’t complaining. Can see that people are using facets, EndNote desktop and citation links, and labels on the item page.

Feedback themes – mostly searching, getIt and viewIt access.

Q: You want to do more user education – have you done anything on education at point-of-need, ie on Primo itself?
A: Redesigning Primo LibGuide, investigating maybe creating a chatbot. Some subject librarians are embedded in faculty so sometimes even involved in lectures.

“Primo is broken, can you fix it?” #anzreg2019

“Primo is broken, can you fix it?”: Converting anecdotal evidence into electronic resource access in Alma and Primo
Petrina Collingwood, Central Queensland University

Combined library and IT service. EZproxy/ADFS authentication.

Problem: implemented quickly in late 2016; in 2017 received lots of reports of broken links (deriving from unforeseen consequences of config choices – including moving EBSCOhost auth from EZproxy to SSO). Limited staff resources to troubleshoot. A new Digital Access Specialist position was created to fix the issues.

Approach: sought examples of issues, then devised a plan to systematically check P2E records, check parsers, check that static URLs were correct, etc.

Causes of errors: multifarious! Incorrect metadata in PCI, target databases or Alma; configuration of parsers, electronic service linking or EZproxy; limitations of the EBSCO link resolver; incorrect availability/coverage; links not proxied in Primo.

Major problems: EBSCOhost links; EZproxy not enabled on some collections; EZproxy config stanzas not maintained; standalone portfolio static URLs not maintained.

Fixed: 15,000 Kanopy standalone portfolios weren’t proxied, so moved them into a collection for an easy fix. Reduced EZproxy stanzas by 63%. All EBSCOhost collections had major issues.

In late 2017 they moved EBSCOhost from EZproxy to SSO for the convenience of students, but:

  • Alma’s generated link didn’t work as it didn’t use the authtype or custid parameters (see the sketch after this list). The authtype parser parameter wasn’t configurable – opened a case and Alma fixed this.
  • The EBSCO link resolver plugin gives more accurate links, but again missing the authtype and custid parameters, so it didn’t work off campus. The integration profile in Alma contains the API user ID but no username/password to access the CQU account, so Ex Libris just pulled back generic URLs which wouldn’t work. Ex Libris wouldn’t fix this.
  • EBSCO link resolver – when the plugin is not enabled, OpenURL is used, but it gives errors any time volume or issue numbers aren’t available. EBSCO had no plans to fix this. The workaround was to create a dynamic URL in the Electronic Service: gigantic code with four IF statements covers most situations…. Problems with the URLENCODE function cause issues whenever diacritics, copyright symbols etc appear; Ex Libris has this in development. It also has no access to the jkey parameter, which is a problem for regional newspapers where the Alma title doesn’t match the EBSCO title.
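
For illustration, the shape of a working EBSCOhost SSO link as described above – the base link plus the authtype and custid parameters that Alma’s links were missing (all values here are placeholders):

function ebscoLink(db, accessionNumber) {
  const params = new URLSearchParams({
    direct: 'true',
    db: db,                  // eg 'a9h'
    AN: accessionNumber,
    authtype: 'sso',         // the parameter Alma's generated link omitted
    custid: 'yourCustId',    // institutional ID, also needed off campus
  });
  return 'https://search.ebscohost.com/login.aspx?' + params.toString();
}

console.log(ebscoLink('a9h', '12345678'));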

Essentially the solutions to the problems caused more problems….

Remaining possible solutions:

  • Go back to EZproxy (unfortunately lots of links in Moodle)
  • Go OpenAthens (probably worse than going back to EZproxy)
  • Unsubscribe from EBSCO (tempting but not practical)
  • Do nothing (use current FAQ workaround)
  • Once the URLENCODE bug is fixed, turn off the EBSCO link resolver plugin in Alma and implement Dynamic URLs
  • Once the URLENCODE bug is fixed, ask Alma to use Dynamic URLs for everything

“No full text error” problem – caused because a PNX record might have 3 ISBNs and Alma 2 ISBNs; one matches, so we get “full text available” – but the OpenURL only sends one ISBN, and if that one isn’t in Alma it returns “No full text error” (see the sketch below). Ex Libris says this is pending development.
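
The mismatch in miniature (invented ISBNs):

const pnxIsbns  = ['1111', '2222', '3333'];   // ISBNs on the Primo (PNX) record
const almaIsbns = ['2222', '4444'];           // ISBNs on the Alma record

// Availability check: any overlap means "full text available"
console.log(pnxIsbns.some(i => almaIsbns.includes(i)));   // true ('2222' matches)

// But the OpenURL carries only one ISBN, eg the first on the PNX record:
console.log(almaIsbns.includes(pnxIsbns[0]));             // false -> "No full text error"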

Most issues demystified and many even solved. Still working to resolve some.

Q: Would it get anywhere if a bunch of libraries get together to lobby EBSCO to fix their link resolver?
A: Maybe; not sure of their reasons for hesitating.

A briefing from Ex Libris #anzreg2019

A briefing from Ex Libris on new initiatives and topics
Melanie Fitter, Professional Services Director APAC region, Ex Libris

Alma UI – looking at pain points; various systemwide tools in progress. Feedback messages have been released. Working on improving menus, choosing multiple facets, and improving configuration of mapping and code tables. Working on accessibility.

Metadata Editor – has always been a source of complaints, so working on navigation, accessibility of tools, more easily working between different areas, and adding records to an editing queue from search. Some features will start trickling through from the end of the year. The old editor will be gone around mid-2020.

The Known Issues Operator role gives access to the known issues list, which shows high-priority issues/bugs with a fix date (quarter or specific monthly release) associated with them. So you can search there and then either be reassured or create your own case.

CDI – lots of benefits with the latest hardware architecture: faster update cycle, single activation, merged records instead of grouped records. About 92% of permalinks will work after the move… In Alma there’ll be a new CDI tab on collection records. Rollout: Alma/Primo VE move Q4 2019 – Q2 2020, while Primo/SFX move Q1 2020 – Q4 2020.

COUNTER 5 – hopefully testing done by end of year and launched to everyone by Jan 2020. Both COUNTER 4 and 5 run in parallel until 4 is phased out ‘eventually’. In the vendor record > Usage data tab you can add a SUSHI account and choose whether it’s 4 or 5.

Provider Zone content loading – currently vendors provide data to Ex Libris, which does normalisation etc, then it’s loaded – this causes a bottleneck. So providing a place where vendors can upload their own content, based on their APIs, letting vendors load content straight into collections/portfolios/MARC from their own data. This should be seamless from the library’s point of view – on the collection we’ll just see eg “Managed by IEEE Xplore”. Five initial partners: ProQuest, IEEE, Sage, Alexander Street Press, Taylor and Francis – in production at the start of 2020.

Resource Sharing – looking at creating a new next-gen system. Focus on patron value, library efficiency and a shared holdings index; also including search for nearby libraries. Estimated timing: general release end 2020, with the ability to add non-Alma institutions as lenders/borrowers at end 2021.

DARA – plan to add recommendations for high-demand item purchase, cataloguing (eg duplicates), and more e-collection portfolio recommendations.

Next Gen analytics – moving towards data visualisation using OBI 12.