Tag Archives: anzreg2019

Round-up of #anzreg2019 sessions

ANZREG = the Australia / New Zealand Ex Libris User Group (the acronym is historic). This covers topics related to Alma, Primo, Leganto, Esploro, etc etc.

I was involved (not heavily) in organising the conference and moderated the developers’ day. My main takeaway from this is that if you have the option to pay $$$ for AV support during a conference, pay it: it’s worth every single cent to have someone there who’s responsible for the mics and livestreaming and remote presentations, letting you focus on the people and timekeeping and stuff.

Day 1

  • I made a terrible strategic decision not to liveblog the keynote “Libraries at the Edge of Reality”. Keynotes are often hard to liveblog and this would have been too but I regret not writing down the first point of Jeff Brand’s “Manifesto for Civilising Digitalisation”. It was – after talking about the respect people have for physical libraries and other spaces; about the grief people feel when eg Notre Dame burnt because they’ve got an emotional connection to it – about making a virtual/digital space that would deserve that same feeling and respect. It left me wondering what kind of website does this? The closest I can think of is Wikipedia maybe?
  • Predicting Student Success with Leganto – library joined an Ex Libris pilot project to see if it’d be possible to predict student success/failure based on reading list interactions. Some limited success but lots of false positives/false negatives. Would need lots more data, and lots of discretion if planning any intervention based on the results.
  • Understanding user behaviour and motivations – turned on “expand my results” by default and got a large increase in interloan requests, especially from first-time users/undergraduates. Big usability improvement.
  • Aligning project milestones to development schedules – introduced Leganto in multiphase project, making various bugfix/enhancement requests along the way
  • Exploring Esploro – had a very unintegrated repository/CRIS system built on manual processes. Esploro eliminates much of this double-handling, has automagic harvesting etc. Researcher still needs to upload full-text themselves but system sends emails.
  • A national library perspective on Alma – lots of original cataloguing, which Alma isn’t strong in. Numerous challenges around this and born-digital items; various workarounds found. Make heavy use of templates.
  • “It should just work”: access to library resources – sponsor presentation on LibKey products, essentially a redesigned link resolver plugin. Possibly too heavy a reliance on DOIs and PDFs, which limits how often it’ll be successful, but it’s early days for the product and they seem keen to expand the cases where it’ll work.
  • A briefing from Ex Libris – upcoming improvements to MetaData Editor, CDI, COUNTER 5, Provider Zone content loading, next gen resource sharing, next gen analytics

Day 2

  • “Primo is broken, can you fix it?” – linking issues from Primo. Lots to do with EBSCOhost (partly including a move from EZproxy to SSO for authentication). Also discussed the infamous “No full text error” problem which Ex Libris apparently says is in development.
  • What do users want from Primo?  – very detailed talk on getting evidence on how users use Primo, and what improvements to make as a result. Includes links to survey kits and dataset of analytics.
  • Achieving self-management using Leganto  – Very successful implementation. Started with a small pilot project which helped finetune how they sold it, built their own confidence, and created champions among their userbase. But ultimately seems like their faculty just really like the product (even if they’re not yet using all the functionality). Library is retaining some functions in their control eg rollover.
  • Creating actionable data using Alma Analytics – using various dashboard visualisations to inform a large weeding project. Will share reports in community area.
  • Central Discovery Index – update on CDI from the libraries testing it. Testing only partway through. Some issues found, Ex Libris investigating these. Switchover is planned by July for all customers.

Developers’ Day

  • Primo Workflow Testing with Cypress – I’ve long liked the idea of automated testing, but figured I didn’t have the skills to set it up. With Cypress, which uses JavaScript… I just might. The time is another matter but I think I want to explore it as it could be useful for a lot more systems than just Primo, and give us early warning when things break (instead of us finding out days later when someone gets around to using and/or reporting it).
  • Using APIs to enhance the user experience – using the APIs to create their own user interface over the top of their various Ex Libris products for consistency, usability, robustness (by caching so it covers downtime better). Big investment of time! But makes sense in their context.
  • Harnessing Alma Analytics and R/RShiny for Insights – RShiny for interactive visualisation. Learning curve but powerful (and free!) Their talk showed some cool use cases.
  • You are what you count – another really detailed talk, basic theme being to be strategic about what you count – make metrics fit your strategy, not dictate it.
  • The fight against academic piracy – Splunk with EZproxy data to automate blocking users who fit a pattern of excessive/abnormal downloads. Some false positives but easily resolved and generally results in positive and constructive conversations.
  • rss2oai for harvesting WordPress into Primo – this was my talk, slides not yet live and I obviously didn’t liveblog 🙂 but the code is at https://github.com/LincolnUniLTL/rss2oai. At the last minute this morning I realised that I hadn’t included a section on what it actually looks like for users as a result, so hurriedly edited that in. During the session someone asked if we had analytics on how it was used, which is another massive oversight I should rectify sometime When I Have Time (and can overcome my hatred of Google Analytics).

The fight against academic piracy #anzreg2019

UniSA Library and the fight against academic piracy
Sam Germein, University of South Australia

Previous method for monitoring abuse of EZproxy was cumbersome and prone to error.

Next used Splunk. Could get a top 10 of downloaders; do a lookup on usernames etc. Reduced time to look for unauthorised access, but vendors would still contact them outside of business hours, and block access to the EZproxy server for potentially the whole weekend.

Splunk has a notification function – looking into how to use this.

Eg a report if a username logging in from three countries or more. (Two countries turned up lots of false positives due to VPNs.) Alerts got sent to Sam by email. Could then block the username.
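
The talk didn’t show the actual Splunk search, but the logic of the three-country rule is simple enough to sketch in plain JavaScript. The record shape and field names here are made up for illustration – the production version is a Splunk alert, not this code.

```javascript
// Hypothetical re-implementation of the "three or more countries" alert.
// Each record is one EZproxy login event, with a country resolved from
// the client IP; field names are illustrative assumptions.
function flagMultiCountryUsers(records, threshold = 3) {
  const countriesByUser = new Map();
  for (const { username, country } of records) {
    if (!countriesByUser.has(username)) {
      countriesByUser.set(username, new Set());
    }
    countriesByUser.get(username).add(country);
  }
  // Flag any username seen from `threshold` or more distinct countries.
  return [...countriesByUser.entries()]
    .filter(([, countries]) => countries.size >= threshold)
    .map(([username]) => username);
}
```

Raising the threshold from two to three countries is exactly the false-positive tuning described above – VPN users routinely appear from two.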

Looked into other ways it might be more accurate. There was still the potential situation of a student in a country where access was blocked and a VPN was needed. Added database info to see if they’re hopping between lots of databases, and how much content they’re downloading. All this info was built into dashboards, so needed to reverse engineer them and get the info into his report.

Another issue – in the weekend getting alerts on phone where couldn’t view spreadsheet. But Splunk could embed the info in the email.

Extended emails to other team members and to their help desk software to log a formal job and make it part of the business workflow. Got IT Helpdesk involved.

Still getting false positives, so looked into only sending the alert if downloaded more than 25MB. Refine how info displayed for wider range of people managing it.

Increased frequency to every 6 hours.

Using API could directly write the username to the EZproxy deny file – fully automating the block process. Still getting some false positives but much more on the front foot – they see alerts and contact vendor rather than vice versa.

Still lots more to do. Still implementing EZproxy 6.5 and experimenting with the EZproxy blacklist which helps.

Q: How did you decide the parameters?
A: Mostly trial and error, trying to strike a balance between legitimate blocks and false positives. Decided to be reasonably strict.

Q: Have you had any feedback from vendors?
A: Not specifically, but have had a reduction of contacts from vendors about issues.

Q: Have you had feedback from false positives blocked?
A: No, put a note in the deny file. [Another audience member’s had some conversations, students are usually good and good opportunity to hear how they’re using resources.]

You are what you count #anzreg2019

You are what you count
Rachelle Orodio & Megan Lee, Monash University

Very often we count what’s easy to count, rather than what’s meaningful. Created a project starting with identifying what metrics they should collect.

Principles: metrics should be strategic, purposeful, attributable, systematic, consistent, accurate, secure and accessible, efficient, integrated. Wanted to reflect key library activities.

Identified 35 metrics – 18 were manually recorded into Google Forms, Qualtrics and other temporary storage. All needed to be pulled into one place so it could be cross-referenced, and data visualisations created. Data only valuable if it can be used and shared.

Looked at Tableau, Splunk, Power BI (uni-preferred for use with data warehouse), Excel, OpenRefine, Google Data Studio.

Data sources: Alma/Primo analytics, Google analytics, EZproxy, Figshare, Libcal/LibGuides, the people counter, and custom software, spreadsheets, forms, manual recording. Quarterly email for collection of manual data.

Dashboard in Tableau with eg number of searches in Primo, how many searches produce zero results. Usage of discussion rooms vs availability. Tableau provides sophisticated visualisations, integrates with lots of sources and is great for large datasets. But expensive annual fees, needs a server environment to share reports securely, and not as easy to use as PowerBI.

Power BI example showing reference queries. Easy to learn and most functionality available in free version; full control over the layout; changes reflected immediately from one graph to another eg when you filter to one library. But to share the interactive version, the other person needs a license – or it’s thousands of dollars for a cloud computing license.

Alma Analytics FTP – used for new titles list. Create report, schedule a job, FTP, then process files, upload to LibraryThing to get bookcovers in a carousel.
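
The “process files” step presumably boils down to pulling identifiers out of the exported report before the LibraryThing upload. A minimal sketch, assuming the export is a simple CSV with an ISBN column (the column name and naive no-quoted-fields parsing are my assumptions):

```javascript
// Sketch: extract ISBNs from a scheduled Alma Analytics export, ready
// for upload to LibraryThing to get book covers. Assumes a simple CSV
// (no quoted/embedded commas) with a header row containing "ISBN".
function extractIsbns(csvText, column = "ISBN") {
  const [header, ...rows] = csvText
    .trim()
    .split("\n")
    .map((line) => line.split(","));
  const idx = header.indexOf(column);
  if (idx === -1) throw new Error(`No "${column}" column in export`);
  return rows.map((row) => row[idx]).filter(Boolean);
}
```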

Project is ongoing. Scoping is important. Lots of info you could present, have to select the key data based on target audience, their needs etc.

Harnessing Alma Analytics and R/RShiny for Insights #anzreg2019

Harnessing Alma Analytics and R/RShiny for Insights
David Lewis & Drew Fordham, Curtin University

Interactive visualisation tools useful as it lets the user choose (within parameters) what they want to see. Alma Analytics was a bit limited. Looked at products like Tableau but it’s mostly for visualisation (and expensive) albeit easy to use.  R/RShiny free to install on desktop, more of a learning curve but worth it.

Early successes:

  • exporting Analytics -> CSV -> cleaning with R -> reimporting into Alma. Weeding project with printouts of the whole collection was highly manual, lots of errors, seemingly endless. With R, ran logic over entire collection and could print targeted pick lists for closer investigation. Massively accelerated deselection.
  • Could also fine-tune the shelving roster over the semester, which saved money.

Refurbishment modelling needed to create a low-use compactus collection. Created model of previous semester as if the collection had been shelved that way, to see what would actually need to be moved back and forth. Let people explore parameters. Ended up deciding that there’d be a lot of movement in and out of the open access collection and would still require a lot of staff effort – so needed to make the compactus open access, not closed access.

Getting started with Alma Analytics and the Trove API. Started with documentation, then experimenting. Found the only match point was the ISBN. Record structures are complex, so needed to know which substructures were relevant. Created test SQL schema and started trying test queries. Next phase: took 3-4 days to get all their holdings in Trove. Then started importing into a SQL database. Views were cumbersome, so created a table from the view and indexed that – which proved a lot faster.
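
For the ISBN match point, a lookup against the Trove API would look roughly like this. The v2 endpoint shape and parameter names are from memory of the Trove API docs, so treat them as assumptions and check the current documentation before relying on them.

```javascript
// Builds a Trove API search URL for one ISBN (endpoint shape assumed;
// verify against the current Trove API documentation).
function troveIsbnQuery(isbn, apiKey) {
  const params = new URLSearchParams({
    key: apiKey,        // your Trove API key
    zone: "book",
    q: `isbn:${isbn}`,  // ISBN was the only usable match point
    encoding: "json",
  });
  return `https://api.trove.nla.gov.au/v2/result?${params}`;
}
```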

Visualisation examples:

  • number of libraries with shared holdings – in WA, interstate, or both; at university libraries, other libraries, or both; not borrowed since [date slider input].
  • usage by call number – user can select call number range, not borrowed since, etc.

Expanded their professional networks in the process of making a lot of impact with their analyses.

Using APIs to enhance the user experience #anzreg2019

Using APIs to enhance the user experience
Euwe Ermita

Live with Primo and Alma in 2017, and Rosetta and Adlib in 2017. Trying to customise interfaces to fit user needs and reach parity with the previous system.

Adlib (manuscripts, oral history and pictures catalogue) with thumbnails pointing back to Rosetta. Primo doesn’t do hierarchies well but Adlib can show a collection in context. But it’s a different technology stack – .NET, while their developers were used to other techs – so had to bring in skills.

Still getting lots of feedback that experience is inconsistent between website, catalogue, collection viewer, etc. Viewers would get lost. System performance slow for large collections; downtime for many release dates.

Options:

  • do nothing (and hide from users)
  • configure out of box – but hitting diminishing returns
  • decouple user interfaces (where user interface is separate from the application, connected via web services)

Application portfolio management strategy

  • systems of record – I know exactly what I want and it doesn’t have to be unique (eg Rosetta, Alma) – longer lifespan, maintain tight control
  • systems of differentiation – I know what I want but it needs to be different from competitors (eg Primo, their own website)
  • systems of innovation – I don’t know what I want, I need to experiment (developing their own new interfaces) – shorter lifespan, disruptive thinking

But most important is having a good service layer in the middle.

Lots of caching so even if Alma/Primo go down can still serve a lot of content.

Apigee API management layer – an important feature is the response cache so API responses get stored ‘forever’ – cuts response time to 1/180, and cuts load on back-end systems, avoiding hitting the API limit. Also handy to have this layer if you want to make your data open as whatever system you have behind the scenes, the links you give users don’t change; can also give customised API to users (rather than giving them a key to your backend system).
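
Apigee aside, the response-cache idea itself is just memoisation at the service layer. A toy sketch (Apigee adds expiry, quotas, key management etc. as managed infrastructure):

```javascript
// Toy version of the response cache: remember each response the first
// time it's fetched and serve the stored copy afterwards, so the
// back-end (and its API limits) is hit only once per distinct URL.
function makeCachedFetcher(fetchFn) {
  const cache = new Map();
  return function cachedFetch(url) {
    if (!cache.has(url)) {
      cache.set(url, fetchFn(url)); // first request hits the back-end
    }
    return cache.get(url);          // everything after comes from cache
  };
}
```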

MASA – Mesh App and Service Architecture. Want to get rid of point-to-point integrations as if one point changes, you have to update all your integrations. Instead just update the single point-to-mesh connection.

Have done an internal prototype release, looking at pushing out to public end of this year/early next year.

Takeaways:

  • Important to have an application strategy – use systems for their strengths (whether that’s data or usability)
  • Don’t over-customise systems of record: it creates technical debt. Every time there’s an upgrade you have to re-test, re-customise
  • Play with API mediation/management – lots of free tools out there
  • Align technology with business strategy

Primo Workflow Testing with Cypress #anzreg2019

Taskforce – Primo Workflow Testing with Cypress
Nishen Naidoo, Macquarie University

Special interest working group on interoperability has restructured around taskforces focusing on specific community issues. First one has been Primo Workflows – Lee Houghton is project leader, 18 people involved. Working on:

  • workflow requirements gathering (documentation)
  • workflow testing implementation (coding for automatic testing)

Manual testing takes time – there’s more and more to test, more and more often, and less and less time. This means we’re forced to only test the most vital things while other things slip off the radar – especially accessibility.

What if we remove the “manual” from testing, using cypress.io? Cypress is intended for testing web applications – uses JavaScript for writing tests and popular testing frameworks under the hood (Mocha and Chai). With Cypress Scenario Recorder you can do your test in a web browser and record it, like a macro.

You need Node.js installed. Then, in an empty directory (eg from Cmd.exe):
> npm install cypress
> npx cypress open
This sets up example files and tests. Four folders in the cypress directory: fixtures (static config files, eg Primo URL, username/password etc), integration (example tests), plugins (for extending functionality), and support (commands – lets you package up steps in a task).

Looking at integration tests – to run something just click and it goes and runs a whole series of tests at lightning speed. Test sets up context which groups everything together. BeforeEach() gets triggered before each test (eg to open a fresh Primo page). it() is a test with a bunch of actions eg type content into a field and test that it’s there, click on different parts of the page, get specific parts of the DOM. If we don’t get what we expect, Cypress tells you the test failed.

As well as just saving you time because it’s so fast, you can schedule it to run in the background and just notify you if something’s broken.

eg
cy.get("searchBar").type("economics journal"); // types economics journal into the search bar
cy.get("div.search-actions").click(); // clicks the search button

After running it (and seeing the result) you can ‘time-travel’ by hovering on each command to see what the browser looked like at that stage.

One downside is you can’t change out of one domain – a big problem with testing single sign-on, which relies on a lot of transitions between domains and so triggers a cross-origin error. This makes it hard to test things that rely on a user being logged in. How single sign-on works:

Identity Provider <—-pre-configured certificates—> Service Provider (Primo PDS)
^—————————–user————————————–^

All communication between the two goes through the user, so we can simulate that using Cypress Request Agent. Fixtures hold URLs and passwords. before() runs before all tests (does the login), then beforeEach() goes to a new Primo page, then a function tests whether the username shows in the menu.

Q: Aim to share these tests with the community?
A: Yes. 🙂

Central Discovery Index #anzreg2019

Central Discovery Index Working Group
Erin Montagu on behalf of Sonja Dunning, University of Western Australia

CDI is to replace both PCI and Summon Index. Hundreds of millions of records. Will be integrated into Alma so activation workflows simplified, faster update cycles, merged records and new search algorithm. UWA library hopes to gain enhanced discovery and operational efficiencies – hoped joining working group would let them influence development and avoid pitfalls.

Moving from one system to another not always one-to-one. Testing to make sure CDI activations covered all Alma activations to start with; later to make sure search/discovery works as expected. Findings:

  • local collections weren’t mapped – may have to change how these are set up in Alma
  • duplicate collections – Ex Libris is investigating
  • some collections not in CDI – hopefully addressed by final rollout
  • inaccurate result counts – hopefully addressed by final rollout

More testing in progress re search/ranking/full-text rights/record display. Then analysis and development of maintenance guidelines.

Preview:

  • A new facet for CDI search activation; CDI info displaying on collection record.
  • Can “(De)Activate for Search in CDI” in electronic collection view – much easier, but lots of information eg about what the collection contains won’t be migrated which will make troubleshooting harder. (Have provided this feedback but haven’t heard a response.)
  • Can search on CDI fulltext rights, linking etc.
  • CDI tab added to collection record with activation status.
  • In Primo, “multiple sources exist” becomes a merged record.
  • More records in search results due to “improved” search algorithm – don’t know how this works
  • More resource types (including data_sets) (more info on Knowledge Centre: “Resource types in CDI”)
  • More features to be added

Individual switchover Mar-Jun 2020, general switchover (of all remaining customers) July.

For more info from working group: cdi_info@exlibrisgroup.com

Creating actionable data using Alma Analytics #anzreg2019

Beyond the numbers: creating actionable data using Alma Analytics dashboard
Aleksandra Petrovic, University of Auckland

Using analytics to inform relegation of print resources (to off-site storage) and retention (on main shelves).

Alma Analytics lets you create very detailed reports, but it’s a fair amount of work – especially data cleaning and analysing – to get 100% accuracy. A lower-accuracy option using the dashboard would be much quicker. Visualisations they used included:

  • Overview by subject showed how many items had no usage, low usage, medium usage, high usage in different subjects based on checkout history.
  • Overview of usage by publication year bands
  • Overview of usage of possible duplicates in different subjects
  • Overview of weeding reports that could be more closely investigated
  • Overview of books needing preservation
  • Quick stats eg monographs count, zero uses, low uses, over 10 years old, possible duplicates – per library

Weeding parameters:

  • publication year
  • Alma usage
  • accession year
  • historical usage
  • possible duplicates

(Other libraries might also consider value, authorship (eg by own institution’s authors), theses (irreplaceable), donations/bequests.)

Different methodology types eg soft methodology would give a number of “soft retain”, “soft relegate”. Could improve with weighted indexes among other options.

Q: Will you share reports in community area?
A: Yes, though some are very specific to Auckland so can’t promise they’ll automatically work.

Q: Are you using Greenglass with this approach?
A: Using this by itself.

Q: Ex Libris have released some P&E duplication reports – how do you approach risk if an electronic item is in an aggregator collection (and might disappear…)?
A: Excluded all electronic items from dashboard as it needs more information about subscribed vs owned. This is a next step…

 

Achieving self-management using Leganto #anzreg2019

DIY for academics: a ‘how to’ guide for achieving self-management using Leganto
Kirstie Nicholson, University of Western Australia

“Self-management” of reading lists meaning unit coordinators creating, editing and submitting reading lists. Gives them autonomy and is efficient for library.

Previously lists were submitted via email; library would create in Alma course reserves (and liaise with unit coordinators) and students had access via Primo. This was always meant to be temporary but became permanent with age. Fully library managed so due to work involved was limited to essential items only. Had low usage and felt this was due to limited functionality. Highly inefficient to process or to monitor.

New model aimed to encourage and support self-management; allow student access via LMS (Blackboard); allow non-essential items; have liaison librarians rather than list processing staff liaise with coordinators. Knew some coordinators wouldn’t want to learn a new system and would be too busy to self-manage, so would want the library to keep managing things and wouldn’t use Leganto. So retained a library-managed list option with some restrictions (as a last resort, essential readings only, and only using basic Leganto functionality).

Started with 10-unit pilot, then went to full implementation in 2018. Branded it as “unit readings” (name chosen by pilot participants) and rolled over existing lists.

97% (215) of lists were self-managed in 2018 – reviewed, submitted, published by coordinators (with assistance available). In S1 2019 99.5% of lists – only one was library-managed. Very good feedback from coordinators re ease of use, intuitive, easy to integrate, fast, responsive. Why did it go so well?

  • Pilot provided real champions speaking up in support of it, and great comments in the survey from both staff and students, which helped promote it. Also a confidence boost for library staff, affirming the model. In the pilot could do one-on-one training, which taught a lot about the needs for the system, which they could then use in the implementation.
  • Functionality was a big leap up. Built to encourage academics to use it eg auto-complete which encourages self-management behaviour.
  • All-library approach on the project. Library management buy-in so all staff invested. Roles well-delineated, staff confident in benefits, well-equipped/trained to support coordinators.
  • Messaging emphasis that it was a university-supported project tying into uni strategy/goals (not just library); not paperwork but part of preparing for unit; benefits for academics and students.
  • Used old approaches as a cue for new opportunities eg when received an email list used it as an opportunity to meet coordinator and show them the new system.

Challenges

  • Publishing: meant to be academics’ responsibility but they often neglected this step and needed lots of followup. From Semester 2 library will take over this responsibility (which is easy) and change messaging to focus on getting academics to switch on LTI.
  • Full engagement with interface: they’d come in, create list, but not return to look at student interactions or add readings
  • Using more self-management functionality: haven’t opened up rollover, etc
  • Support content: what level of support content to provide, how to provide info needed without creating a whole manual. Ex Libris content doesn’t always match their workflows.
  • Transitioning off old system: a third of lists haven’t migrated so need to find out why (eg maybe it’s no longer taught).
  • Uneven use across faculties: both of Leganto and of the LMS.

Future plans to address these:

  • Student benefits are main motivator for academics to transition so want to use analytics more to demonstrate this
  • Targeted communications: define groups of users/non-users and target messaging appropriately; also target based on time of year
  • Support model: communicate this better.
  • Educational enhancement unit: work with this team and target early career educators
  • Usability testing

Q: How did you link Leganto introduction to university goals?
A: Mostly in the realm of engagement librarians at teaching and learning committees. Sent bulletpoints with them. Eg how it ties into uni educational strategy, student retention etc.

 

What do users want from Primo? #anzreg2019

What do users want from Primo? Or how to get the evidence you need to understand user behaviour in Primo.
Rachelle Orodio & Megan Lee, Monash University

Surveyed users about most important services. #4 is LibrarySearch letting them use it quickly; #9 is off-campus access. Feedback that LibrarySearch is “very slow and bulky”, accessing articles “takes way too many steps”, search results “hard to navigate”, “links don’t work”.

Project with strategic objectives, success factors, milestones, etc.

Started by gathering data on user behaviour – Primo usage logs, Primo/Alma analytics, Google analytics. Ingested into Splunk. Got a large dataset: http://tinyurl.com/y5k4nzr4 

How users search:

  • 90% start on basic screen, and 98% use the “All resources” scope (not collections, online articles, popular databases) – basically using the defaults.
  • Only 15% sign in during a session. 51% click on availability statement, 45% click on ViewIt links. Sometimes filter by facets, rarely sort. Don’t display reviews or use tags; don’t browse, don’t use lateral or enrichment links. Little take up of citation export, save session/query, add to eShelf, etc.
  • Most searches are 2-4 words long. 69% are less than 7 words – 14% longer than 50 words! 1.13% of searches are unsuccessful.
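
As a toy version of that query-length analysis (the real thing ran over Primo usage logs in Splunk; the bucket cut-offs here are illustrative, not Monash’s):

```javascript
// Bucket search strings by word count -- a toy version of the Primo
// query-length analysis. Cut-offs are illustrative assumptions.
function queryLengthBuckets(queries) {
  const buckets = { short: 0, medium: 0, long: 0 }; // <5, 5-49, 50+ words
  for (const q of queries) {
    const words = q.trim().split(/\s+/).filter(Boolean).length;
    if (words >= 50) buckets.long++;
    else if (words >= 5) buckets.medium++;
    else buckets.short++;
  }
  return buckets;
}
```

(The very long “queries” are presumably users pasting whole citations or abstracts into the search box.)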

Two rounds of user testing. Splunk analytics -> designed two views (one similar to classic, one stripped down) and ran think-aloud tests on 10 students using these views, along with pre-test and post-test surveys. Results classified into: user education, system changes, system limitations.  System changes were made and testing rerun with another group of students. Testing kits at https://tinyurl.com/y4fgwhhx

Surveys:

  • Searching for authoritative information – start at Google Scholar and databases, only go to Primo if hit a paywall.
  • Preferred the simplified view. Said the most useful features were: advanced search, favourites, citation link to styles – but this wasn’t borne out by observations
  • Liked the “Download now” (LibKey I think) feature and wanted it everywhere

Observations:

  • only sign in if they need to eg to check loans, read articles. So want to educate users and enable auto-login
  • Only a few individuals use advanced search
  • don’t change the scope – renamed scopings and enabled auto-complete
  • prefer a few facets – simplified list of facets
  • don’t change sorting order – changed location and educating
  • want fewer clicks to get full text
  • not familiar with saved queries – needs education

Put new UI in beta for a couple of months, ran roadshows and blog communications. Added a Hotjar feedback widget into the new UI. Responses average a 2.3 rating out of 5 – hoping that people happy with things aren’t complaining. Can see that people are using facets, Endnote desktop and citation links; labels on item page.

Feedback themes – mostly searching, getIt and viewIt access.

Q: You want to do more user education – have you done anything on education at point-of-need ie on Primo itself?
A: Redesigning Primo LibGuide, investigating maybe creating a chatbot. Some subject librarians are embedded in faculty so sometimes even involved in lectures.