Primo Workflow Testing with Cypress #anzreg2019

Taskforce – Primo Workflow Testing with Cypress
Nishen Naidoo, Macquarie University

The special interest working group on interoperability has restructured around taskforces that tackle specific community issues. The first has been Primo Workflows – Lee Houghton is project leader, with 18 people involved. Working on:

  • workflow requirements gathering (documentation)
  • workflow testing implementation (coding for automatic testing)

Manual testing takes time – there’s more and more to test, more and more often, and less and less time. This means we’re forced to only test the most vital things while other things slip off the radar – especially accessibility.

What if we remove the "manual" from testing, using cypress.io? Cypress is intended for testing web applications – tests are written in JavaScript, with popular testing frameworks (Mocha and Chai) under the hood. With Cypress Scenario Recorder you can perform your test in a web browser and record it, like a macro.

From the command line (you need Node.js installed), in an empty directory:
> npm install cypress
> npx cypress open
This sets up example files and tests. There are four folders in the cypress directory:

  • fixtures – static config files, eg Primo URL, username/password etc
  • integration – example tests
  • plugins – for extending functionality
  • support – commands, which let you package up a sequence of steps in a task
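A fixture is just a JSON file the tests can load with cy.fixture(). A minimal sketch of what a cypress/fixtures/primo.json might look like (the file name and fields here are hypothetical, not from the talk):

{
  "url": "https://search.example.edu/primo-explore/search?vid=EXAMPLE",
  "username": "testuser",
  "password": "secret"
}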

Looking at the integration tests – to run one you just click it, and it runs a whole series of tests at lightning speed. A test sets up a context which groups everything together. beforeEach() is triggered before each test (eg to open a fresh Primo page). it() defines a test with a series of actions, eg type content into a field and check that it's there, click on different parts of the page, get specific parts of the DOM. If we don't get what we expect, Cypress tells you the test failed.
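A minimal sketch of that structure (the URL and selector are placeholders, not necessarily Primo's actual markup):

context("Primo basic search", () => {
  beforeEach(() => {
    // open a fresh Primo page before every test
    cy.visit("https://search.example.edu/primo-explore/search?vid=EXAMPLE");
  });

  it("keeps typed text in the search box", () => {
    cy.get("#searchBar")
      .type("economics journal")
      .should("have.value", "economics journal"); // the test fails if the value differs
  });
});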

As well as saving you time because it's so fast, you can schedule it to run in the background and have it notify you only when something's broken.

eg

cy.get("#searchBar").type("economics journal"); // types "economics journal" into the search bar
cy.get("div.search-actions").click();           // clicks the search button

After running it (and seeing the result) you can "time-travel" – hover on each command to see what the browser looked like at that stage.

One downside is that you can't move between domains within a test – a big problem for testing single sign-on, which relies on a lot of transitions between domains; you get a cross-origin error. This makes it hard to test anything that relies on a user being logged in. How single sign-on works:

Identity Provider <--- pre-configured certificates ---> Service Provider (Primo PDS)
        ^                                                        ^
        |__________________________ user _______________________|

All communication between the two goes through the user, so we can simulate the user with Cypress's request agent (cy.request(), which runs outside the browser page and so isn't bound by the cross-origin restriction). Fixtures hold the URLs and passwords. before() runs once before all tests (performs the login), beforeEach() opens a fresh Primo page, then the test checks whether the username shows in the menu.
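A rough sketch of that shape, assuming a simple form-based login endpoint (the URL, field names and selector are hypothetical – a real SAML/PDS flow means following each redirect in the chain with cy.request()):

describe("Signed-in Primo", () => {
  before(() => {
    // log in once for the whole suite; cy.request() isn't subject to
    // the browser's cross-origin restriction
    cy.fixture("primo").then((cfg) => {
      cy.request({
        method: "POST",
        url: cfg.loginUrl, // hypothetical login endpoint
        form: true,
        body: { username: cfg.username, password: cfg.password },
      });
    });
    // session cookies set by the response are reused by later cy.visit() calls;
    // in practice you'd also whitelist them (Cypress.Cookies.defaults) so they
    // survive between tests
  });

  beforeEach(() => {
    cy.fixture("primo").then((cfg) => cy.visit(cfg.url));
  });

  it("shows the username in the menu", () => {
    cy.get("#mainMenu").should("contain", "testuser"); // hypothetical selector
  });
});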

Q: Aim to share these tests with the community?
A: Yes. 🙂

Central Discovery Index #anzreg2019

Central Discovery Index Working Group
Erin Montagu on behalf of Sonja Dunning, University of Western Australia

CDI is to replace both PCI and the Summon Index – hundreds of millions of records. It will be integrated into Alma, bringing simplified activation workflows, faster update cycles, merged records and a new search algorithm. UWA Library hopes to gain enhanced discovery and operational efficiencies, and hoped joining the working group would let them influence development and avoid pitfalls.

Moving from one system to another is not always one-to-one. Testing started with making sure CDI activations covered all Alma activations; later it will make sure search/discovery works as expected. Findings:

  • local collections weren’t mapped – may have to change how these are set up in Alma
  • duplicate collections – Ex Libris is investigating
  • some collections not in CDI – hopefully addressed by final rollout
  • inaccurate result counts – hopefully addressed by final rollout

More testing in progress re search/ranking/full-text rights/record display. Then analysis and development of maintenance guidelines.

Preview:

  • A new facet for CDI search activation; CDI info displaying on collection record.
  • Can "(De)Activate for Search in CDI" in electronic collection view – much easier, but lots of information, eg about what the collection contains, won't be migrated, which will make troubleshooting harder. (Have provided this feedback but haven't heard a response.)
  • Can search on CDI fulltext rights, linking etc.
  • CDI tab added to collection record with activation status.
  • In Primo, “multiple sources exist” becomes a merged record.
  • More records in search results due to “improved” search algorithm – don’t know how this works
  • More resource types (including data_sets) (more info on the Knowledge Centre: "Resource types in CDI")
  • More features to be added

Individual switchover Mar-Jun 2020, general switchover (of all remaining customers) July.

For more info from working group: cdi_info@exlibrisgroup.com

Creating actionable data using Alma Analytics #anzreg2019

Beyond the numbers: creating actionable data using Alma Analytics dashboard
Aleksandra Petrovic, University of Auckland

Using analytics to inform relegation of print resources (to off-site storage) and retention (on main shelves).

Alma Analytics lets you create very detailed reports, but it's a fair amount of work, especially the data cleaning and analysis needed to get 100% accuracy. A lower-accuracy option using the dashboard is much quicker. Visualisations they used included:

  • Overview by subject: how many items had no/low/medium/high usage in different subjects, based on checkout history
  • Overview of usage by publication year bands
  • Overview of usage of possible duplicates in different subjects
  • Overview of weeding reports that could be investigated more closely
  • Overview of books needing preservation
  • Quick stats eg monographs count, zero uses, low uses, over 10 years old, possible duplicates – per library

Weeding parameters:

  • publication year
  • Alma usage
  • accession year
  • historical usage
  • possible duplicates

(Other libraries might also consider value, authorship (eg by own institution’s authors), theses (irreplaceable), donations/bequests.)

Different methodology types: eg a soft methodology would give counts of "soft retain" and "soft relegate". Could be improved with weighted indexes, among other options.

Q: Will you share reports in community area?
A: Yes, though some are very specific to Auckland so can’t promise they’ll automatically work.

Q: Are you using Greenglass with this approach?
A: Using this by itself.

Q: Ex Libris have released some P&E duplication reports – how do you approach risk if an electronic item is in an aggregator collection (and might disappear…)?
A: Excluded all electronic items from dashboard as it needs more information about subscribed vs owned. This is a next step…


Achieving self-management using Leganto #anzreg2019

DIY for academics: a ‘how to’ guide for achieving self-management using Leganto
Kirstie Nicholson, University of Western Australia

“Self-management” of reading lists meaning unit coordinators creating, editing and submitting reading lists. Gives them autonomy and is efficient for library.

Previously lists were submitted via email; the library would create them in Alma course reserves (liaising with unit coordinators) and students had access via Primo. This was always meant to be temporary but became permanent. Being fully library-managed, the work involved meant lists were limited to essential items only. Usage was low, which was felt to be due to the limited functionality. Highly inefficient to process and to monitor.

The new model aimed to encourage and support self-management; allow student access via the LMS (Blackboard); allow non-essential items; and have liaison librarians rather than list-processing staff liaise with coordinators. They knew some coordinators wouldn't want to learn a new system and would be too busy to self-manage, so would want the library to keep managing things and wouldn't use Leganto. So they retained a library-managed list option with some restrictions (last resort only, essential readings only, and only basic Leganto functionality).

Started with 10-unit pilot, then went to full implementation in 2018. Branded it as “unit readings” (name chosen by pilot participants) and rolled over existing lists.

97% (215) of lists were self-managed in 2018 – reviewed, submitted, published by coordinators (with assistance available). In S1 2019 99.5% of lists – only one was library-managed. Very good feedback from coordinators re ease of use, intuitive, easy to integrate, fast, responsive. Why did it go so well?

  • The pilot provided real champions speaking up in support of it, and great survey comments from both staff and students, which helped promote it. It was also a confidence boost for library staff, affirming the model. The pilot allowed one-on-one training, which taught a lot about users' needs for the system – lessons that could then be used in the implementation.
  • Functionality was a big leap up. Built to encourage academics to use it eg auto-complete which encourages self-management behaviour.
  • All-library approach on the project. Library management buy-in so all staff invested. Roles well-delineated, staff confident in benefits, well-equipped/trained to support coordinators.
  • Messaging emphasis that it was a university-supported project tying into uni strategy/goals (not just library); not paperwork but part of preparing for unit; benefits for academics and students.
  • Used old approaches as a cue for new opportunities eg when received an email list used it as an opportunity to meet coordinator and show them the new system.

Challenges

  • Publishing: meant to be the academics' responsibility, but they often neglected this step and needed lots of follow-up. From Semester 2 the library will take over this responsibility (which is easy) and change messaging to focus on getting academics to switch on the LTI.
  • Full engagement with interface: they’d come in, create list, but not return to look at student interactions or add readings
  • Using more self-management functionality: haven’t opened up rollover, etc
  • Support content: what level of support content to provide, how to provide info needed without creating a whole manual. Ex Libris content doesn’t always match their workflows.
  • Transitioning off the old system: a third of lists haven't migrated, so need to find out why (eg maybe the unit is no longer taught).
  • Uneven use across faculties: both of Leganto and of the LMS.

Future plans to address these:

  • Student benefits are main motivator for academics to transition so want to use analytics more to demonstrate this
  • Targeted communications: define groups of users/non-users and target messaging appropriately; also target based on time of year
  • Support model: communicate this better.
  • Educational enhancement unit: work with this team and target early career educators
  • Usability testing

Q: How did you link Leganto introduction to university goals?
A: Mostly in the realm of engagement librarians at teaching and learning committees. Sent bullet points with them, eg how it ties into the uni's educational strategy, student retention etc.


What do users want from Primo? #anzreg2019

What do users want from Primo? Or how to get the evidence you need to understand user behaviour in Primo.
Rachelle Orodio & Megan Lee, Monash University

Surveyed users about the most important services: #4 is LibrarySearch letting them use it quickly; #9 is off-campus access. Feedback was that LibrarySearch is "very slow and bulky", accessing articles "takes way too many steps", search results are "hard to navigate", "links don't work".

Project with strategic objectives, success factors, milestones, etc.

Started by gathering data on user behaviour – Primo usage logs, Primo/Alma analytics, Google analytics. Ingested into Splunk. Got a large dataset: http://tinyurl.com/y5k4nzr4 

How users search:

  • 90% start on basic screen, and 98% use the “All resources” scope (not collections, online articles, popular databases) – basically using the defaults.
  • Only 15% sign in during a session. 51% click on availability statement, 45% click on ViewIt links. Sometimes filter by facets, rarely sort. Don’t display reviews or use tags; don’t browse, don’t use lateral or enrichment links. Little take up of citation export, save session/query, add to eShelf, etc.
  • Most searches are 2-4 words long. 69% are under 7 words – but 14% are longer than 50 words! 1.13% of searches are unsuccessful.

Two rounds of user testing. From the Splunk analytics they designed two views (one similar to classic, one stripped down) and ran think-aloud tests with 10 students using these views, along with pre-test and post-test surveys. Results were classified into: user education, system changes, system limitations. System changes were made and testing was rerun with another group of students. Testing kits at https://tinyurl.com/y4fgwhhx

Surveys:

  • Searching for authoritative information – they start at Google Scholar and databases, and only go to Primo if they hit a paywall.
  • Preferred the simplified view. Said the most useful features were advanced search, favourites, and the citation styles link – but this wasn't borne out by observations
  • Liked the “Download now” (LibKey I think) feature and wanted it everywhere

Observations:

  • only sign in if they need to eg to check loans, read articles. So want to educate users and enable auto-login
  • Only a few individuals use advanced search
  • don't change the scope – renamed scopes and enabled auto-complete
  • prefer a few facets – simplified list of facets
  • don't change sorting order – changed its location and are educating users
  • want fewer clicks to get full text
  • not familiar with saved queries – needs education

Put the new UI in beta for a couple of months; ran roadshows and blog communications. Added a Hotjar feedback widget to the new UI. Responses average a 2.3 rating out of 5 – hoping that people who are happy just aren't commenting. Can see that people are using facets, EndNote desktop and citation links, and labels on the item page.

Feedback themes – mostly searching, getIt and viewIt access.

Q: You want to do more user education – have you done anything on education at point-of-need, ie in Primo itself?
A: Redesigning the Primo LibGuide; investigating maybe creating a chatbot. Some subject librarians are embedded in faculties, so are sometimes even involved in lectures.

“Primo is broken, can you fix it?” #anzreg2019

“Primo is broken, can you fix it?”: Converting anecdotal evidence into electronic resource access in Alma and Primo
Petrina Collingwood, Central Queensland University

Combined library and IT service. EZproxy/ADFS authentication.

Problem: implemented quickly in late 2016; in 2017 received lots of reports of broken links (stemming from unforeseen consequences of config choices – including moving EBSCOhost auth from EZproxy to SSO). Limited staff resources to troubleshoot. A new Digital Access Specialist position was created to fix the issues.

Approach: sought examples of issues, devised a plan to systematically check P2E records, check parsers, check static URLs correct, etc

Causes of errors: multifarious! Incorrect metadata in PCI, target databases or Alma; configuration of parsers, electronic service linking or EZproxy; limitations of the EBSCO link resolver; incorrect availability/coverage; links not proxied in Primo.

Major problems: EBSCOhost links; EZproxy not enabled on some collections; EZproxy config stanzas not maintained; standalone portfolio static URLs not maintained.

Fixed: 15,000 Kanopy standalone portfolios weren't proxied, so moved them into a collection for an easy fix. Reduced EZproxy stanzas by 63%. All EBSCOhost collections had major issues.

In late 2017 they moved EBSCOhost from EZproxy to SSO for the convenience of students, but:

  • Alma's generated link didn't work as it didn't use the authtype or custid parameters. The authtype parser parameter wasn't configurable – opened a case and Ex Libris fixed this in Alma.
  • The EBSCO link resolver plugin gives more accurate links, but these were again missing the authtype and custid parameters, so didn't work off campus. The integration profile in Alma contains the API user ID but no username/password to access the CQU account, so Ex Libris just pulled back generic URLs which wouldn't work. Ex Libris wouldn't fix this.
  • EBSCO link resolver – when the plugin is not enabled, OpenURL is used instead, but this errors whenever volume or issue numbers aren't available, and EBSCO had no plans to fix it. The workaround was to create a dynamic URL in the Electronic Service – gigantic code with four IF statements covers most situations…. But problems with the URLENCODE function mean issues whenever there are diacritics, copyright symbols etc (Ex Libris has a fix in development). The dynamic URL also has no access to the jkey parameter, which is a problem for regional newspapers where the Alma title doesn't match the EBSCO title.

Essentially the solutions to the problems caused more problems….

Remaining possible solutions:

  • Go back to EZproxy (unfortunately lots of links in Moodle)
  • Go OpenAthens (probably worse than going back to EZproxy)
  • Unsubscribe to EBSCO (tempting but not practical)
  • Do nothing (use current FAQ workaround)
  • After the URLENCODE bug is fixed, turn off the EBSCO link resolver plugin in Alma and implement Dynamic URLs
  • When the URLENCODE bug is fixed, ask Alma to use Dynamic URLs for everything

"No full text error" problem – caused because a PNX record might have 3 ISBNs while Alma has 2. One matches, so we get "full text available" – but the OpenURL then sends only one ISBN, and if that one isn't in Alma it returns a "No full text" error. Ex Libris says a fix is pending development.
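A toy sketch of the mismatch in JavaScript (illustrating the logic only, not Ex Libris's actual code; ISBNs are made up):

// Discovery-side check: any overlap between PNX and Alma ISBNs => "full text available"
const pnxIsbns  = ["9780000000017", "9780000000024", "9780000000031"];
const almaIsbns = ["9780000000031", "9780000000048"];
console.log(pnxIsbns.some((i) => almaIsbns.includes(i))); // true – availability is shown

// ...but the OpenURL that follows carries only one ISBN:
const openUrlIsbn = pnxIsbns[0];
console.log(almaIsbns.includes(openUrlIsbn)); // false => "No full text" error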

Most issues demystified and many even solved. Still working to resolve some.

Q: Would it get anywhere if a bunch of libraries get together to lobby EBSCO to fix their link resolver?
A: Maybe; not sure of their reasons for hesitating.

A briefing from Ex Libris #anzreg2019

A briefing from Ex Libris on new initiatives and topics
Melanie Fitter, Professional Services Director APAC region, Ex Libris

Alma UI – looking at pain points; various system-wide tools in progress. Feedback messages have been released. Working on improving menus, choosing multiple facets, and improving configuration of mapping and code tables. Working on accessibility.

Metadata Editor – has always been a source of complaints, so working on navigation, accessibility of tools, working more easily between different areas, and adding records to an editing queue from search. Some features will start trickling through from the end of the year. The old editor will be gone around mid-2020.

The Known Issues Operator role gives access to a known issues list showing high-priority issues/bugs, each with a fix date (a quarter, or the specific monthly release). So you can search there first, then either be reassured or create your own case.

CDI – lots of benefits from the latest hardware architecture: faster update cycle, single activation, merged records instead of grouped records. About 92% of permalinks will work after the move… In Alma there'll be a new CDI tab on collection records. Rollout: Alma/Primo VE customers move Q4 2019 – Q2 2020, while Primo/SFX customers move Q1 2020 – Q4 2020.

COUNTER 5 – hopefully testing is done by the end of the year and it launches to everyone by Jan 2020. COUNTER 4 and 5 will run in parallel until 4 is phased out 'eventually'. In the vendor record > Usage data tab you can add a SUSHI account and choose whether it's 4 or 5.

Provider Zone content loading – currently vendors provide Ex Libris with data, and Ex Libris does normalisation etc before it's loaded – a bottleneck. So they're providing a place where vendors can load their own content straight into collections/portfolios/MARC from their own data, via their APIs. This should be seamless from the library's point of view – on the collection we'll just see eg "Managed by IEEE Xplore". Five initial partners: ProQuest, IEEE, Sage, Alexander Street Press, Taylor and Francis – in production at the start of 2020.

Resource Sharing – looking at creating a new next-gen system. Focus on patron value, library efficiency and a shared holdings index; also including search for nearby libraries. Estimated timing: general release at the end of 2020, with the ability to add non-Alma institutions as lenders/borrowers at the end of 2021.

DARA – plans to add recommendations for high-demand item purchase, cataloguing (eg duplicates), and more e-collection/portfolio recommendations.

Next-gen analytics – moving towards data visualisation using OBI 12.

“It should just work”: access to library resources #anzreg2019

“It should just work”: access to library resources in Discovery layers and Open Web searching
Kendall Bartsch, ThirdIron (Gold Sponsor)

Link resolvers are cumbersome, often claiming "full-text available" when… it's not…. Or there are too many options, which confuses users who just want the text. The perception from users is that the button "almost never works". One suggestion: "just do what Sci-Hub does". Various other workarounds exist – ResearchGate, Academia.edu, Reddit, #ICanHazPDF – and sharing and piracy "steal" an estimated 20% of usage from publisher sites. [Some of this is because link resolvers are clunky; much is also because people aren't members of libraries that can afford what they want.]

ThirdIron reinvented linking syntax with LibKey. It's based on:

  1. article metadata – essentially a dark archive of Crossref, plus metadata from other sources too. They vet, correct, normalise and maintain the data (telling Crossref about mistakes they find), and incorporate open access metadata including OADOI.
  2. entitlements data – tools to import holdings data across different vendors who all represent holding data differently
  3. library authentication/fulfilment

When a user requests an item, LibKey puts all this together and returns the PDF or abstract URL.
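Conceptually, something like this toy sketch (illustrative names only – not ThirdIron's implementation):

// combine article metadata, entitlements and authentication into one best link
function resolve(doi, metadataStore, entitlements, proxy) {
  const article = metadataStore[doi];      // vetted Crossref + other-source metadata
  if (!article) return null;
  if (entitlements.hasFullText(article)) {
    return proxy(article.pdfUrl);          // library-authenticated link straight to the PDF
  }
  if (article.openAccessPdf) {
    return article.openAccessPdf;          // open access fallback (eg via OADOI)
  }
  return article.abstractUrl;              // otherwise land the user on the abstract
}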

LibKey Discovery: can be integrated into various discovery layers including Primo. Links can be "download PDF", "view issue contents", "read article", etc (recently also flagging when an article is in HTML only, not PDF).

LibKey Link: they also want to expand the service to anywhere else you'd use an OpenURL base URL, eg Google Scholar, or linking from Web of Science, reference lists etc. Can fall back to the regular link resolver. (Coming soon.)

LibKey Nomad: Linking from web searching – browser extension that can be downloaded by individual or installed on an enterprise basis.

Results:

  • increasing delivery of PDFs
  • libraries reporting fewer support tickets
  • libraries estimating savings of researcher time

A national library perspective on Alma #anzreg2019

A national library perspective on Alma
Sandra McKenzie, National Library of New Zealand

Ex Libris marketing towards higher education/academic libraries. National Library purpose (per legislation) is to “enrich the cultural and economic life of New Zealand” including by collecting, preserving and protecting documents, and making them accessible for all the people of New Zealand.

NLNZ has two categories of collections:

  1. heritage collections held permanently, in controlled environments – both physical and digital
  2. general collections, borrowable but usually also kept in stack

Legal deposit – for physical resources publishers must deposit 2 copies (one for the heritage collection and one for the general collection); for digital resources, 1 file format. If something is published in both forms, both must be deposited and each gets a separate record. NLNZ provides ISBNs and ISSNs, and is the national bibliographic agency for New Zealand. The bibliography's scope includes not just legal deposit but anything about New Zealand from overseas. A monthly extract from Alma is made available in various formats, and the dataset is updated quarterly.

Analogue avalanche and digital deluge. (Project to import a few terabytes of data from musicians – will be talked about at National Digital Forum.)

Migrated to Alma/Primo in 2018 – without systems librarians. Challenges:

  • Metadata editor – very clunky for original cataloguing, which is very important for them. A "fluid approach to adhering to MARC21". Looking forward to the new metadata editor.
  • Born-digital workflows – nonexistent, though an Idea is under review. No digital legal deposit workflow, so they had to use a physical order workflow – they need to be able to chase deposits up.
  • Deduplication – different publishing numbers for the digital inventory and the physical workaround inventory, which Primo dedups. Two very different maps, contour vs cadastral, with the same title – the only difference was the series number, so Primo deduped them (see the sketch after this list). Could turn this off, but then there'd be a problem with the born-digital material.
  • Serials – an ongoing challenge; vast parts of the collection have no item records, so requesting is very difficult
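A toy sketch of the map problem: if the match key is built from fields that ignore the series number, two different maps collide (the key fields and records here are illustrative, not Primo's actual dedup algorithm):

// illustrative dedup key – series number deliberately not included
function dedupKey(rec) {
  return [rec.title, rec.publisher, rec.year].join("|");
}

const contour   = { title: "Map of Example District", publisher: "Example Maps", year: "1998", series: "260" };
const cadastral = { title: "Map of Example District", publisher: "Example Maps", year: "1998", series: "261" };

console.log(dedupKey(contour) === dedupKey(cadastral)); // true => merged into one Primo record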

Opportunities

  • Normalisation rules – used extensively, and tested in the sandbox, which they love. Used record-by-record, eg to create an online record based on a print record; also used for metadata clean-up and bulk record creation
  • Integration with Rosetta which stores files eg for podcasts

New ways of working

  • Thinking across teams, looking at the order they do things
  • Building records from spreadsheets; lots of use of MarcEdit, which had previously only been used by specialists
  • New position of "digital collecting specialist", who uses norm rules, APIs, MarcEdit etc
  • A template holds the podcast-level details; specific details for each episode are harvested from the RSS feed to fill in the rest (see the sketch after this list). More scripts manage import into Rosetta. From 2018 to 2019 a big increase in machine-generated records, from 500 to over 2500. (People are still involved, but the machine does the boring parts – cataloguers choose templates and analyse the subject keywords required.)
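A rough sketch of that harvesting pattern in Node.js (the feed URL and record fields are hypothetical – NLNZ's actual scripts and MARC mapping weren't shown in the talk):

const https = require("https");

https.get("https://example.org/podcast/rss", (res) => {
  let xml = "";
  res.on("data", (chunk) => (xml += chunk));
  res.on("end", () => {
    // naive extraction for the sketch; a real script would use an XML parser
    const items = [...xml.matchAll(
      /<item>[\s\S]*?<title>([\s\S]*?)<\/title>[\s\S]*?<pubDate>([\s\S]*?)<\/pubDate>/g
    )];
    const records = items.map(([, title, pubDate]) => ({
      seriesTitle: "Example podcast", // constant fields come from the series-level template
      episodeTitle: title.trim(),     // per-episode details come from the feed
      published: pubDate.trim(),
    }));
    console.log(`${records.length} episode records generated`);
  });
});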

Q: How do you deal with archival materials?
A: These are handled in a separate system.

Q: Have you managed to overcome the problems with serials?
A: Work in progress….

Exploring Esploro #anzreg2019

Exploring Esploro: migration and implementation of Esploro at Southern Cross University
Margie Pembroke, Southern Cross University

Southern Cross University punches above its weight research-wise, and management is looking to expose that research more.

In Australia many university repositories were created from government funding, with an initial focus on green open access. Then ERA (like PBRF) came along and hijacked repositories, and then publishers hijacked open access… The CAUL review of Australian Repository Infrastructure found that integration was falling behind.

They currently use bepress for their repository, plus a separate CRIS. Lots of manual reporting, data entry, reconciliation etc. No integrations, no workflows; reliant on self-reporting. Submission process: the researcher gets a PDF form from the website, fills it out, and emails it to the repository and to the CRIS, which enter the data independently.

Early adopter of Esploro – a chance to influence the platform, with financial benefits but also risks. Aiming to soft launch this November, running the two systems in parallel until researcher profiles and outward-facing pages in Esploro look how they should.

The first migration has happened – an OAI-PMH export using document export, with all bepress data also backed up to AWS S3. Ex Libris helped with metadata mapping. Could export bepress statistics into Esploro too.

New workflow – a researcher writes an article, it appears on the web, Esploro finds it and surfaces it in both the repository and CRIS views. Automagic harvesting based on DOI/ISBN leverages the Summon index, de-duplicates, and links to ORCID; they will look to harvest from Research Data Australia, Figshare, Unpaywall, etc.

Integrations planned with the HR system (never did this for Alma – staff were added ad hoc when needing to request/borrow!), the uni data warehouse, the uni ORCID API, the CRIS, Research Data Australia, Libraries Australia, and DataCite for minting DOIs.

The public interface is essentially Primo VE plus researcher profiles. Internal workflow is in Alma. Sherpa/Romeo integration etc.

Benefits for library:

  • automatic harvesting / ordering (for ERA auditing) which saves work, avoids typos, means people don’t have to remember to place orders
  • staff already familiar with how Alma works
  • can add preprint, post-print, link to paper etc all sharing basic metadata
  • ease of management of research profiles
  • support from Ex Libris community

Benefits for researcher:

  • massive timesaver – only need to manually enter metadata for items not published on the internet
  • increased visibility especially with automatic generation/updating of profile
  • compliance with funding requirements

Benefits for institution:

  • comprehensive picture of research outputs across the institution
  • can leverage Alma analytics to create reports
  • avoids duplication of data-entry; increases efficiency


Q: How do you get full-text in?
A: Initially pulls in from bepress. For new items, sends email to researchers to ask them to upload accepted manuscript.