“Primo is broken, can you fix it?” #anzreg2019

“Primo is broken, can you fix it?”: Converting anecdotal evidence into electronic resource access in Alma and Primo
Petrina Collingwood, Central Queensland University

Combined library and IT service. EZproxy/ADFS authentication.

Problem: implemented quickly in late 2016; in 2017 received lots of reports of broken links (stemming from unforeseen consequences of config choices – including moving EBSCOhost auth from EZproxy to SSO). Limited staff resources to troubleshoot. A new Digital Access Specialist position was created to fix the issue.

Approach: sought examples of issues, then devised a plan to systematically check P2E records, check parsers, check that static URLs were correct, etc.

Cause of errors: multifarious! Incorrect metadata in PCI, target databases or Alma; parser configuration, electronic service linking or EZproxy config; limitations of the EBSCO link resolver; incorrect availability/coverage; links not proxied in Primo.

Major problems: EBSCOhost links; EZproxy not enabled on some collections; EZproxy config stanzas not maintained; standalone portfolio static URLs not maintained.

Fixed: 15,000 Kanopy standalone portfolios weren’t proxied, so moved them into a collection for an easy fix. Reduced EZproxy stanzas by 63%. All EBSCOhost collections had major issues.

Late 2017 moved EBSCOhost from EZproxy to SSO for the convenience of students, but:

  • Alma’s generated link didn’t work as it didn’t use the authtype or custid parameters (see the sketch after this list). The authtype parser parameter wasn’t configurable – opened a case and this was fixed in Alma.
  • The EBSCO link resolver plugin gives more accurate links, but these were again missing the authtype and custid parameters so didn’t work off campus. The integration profile in Alma contains the API user ID but no username/password to access the CQU account, so Ex Libris just pulled back generic URLs which wouldn’t work. Ex Libris wouldn’t fix this.
  • EBSCO link resolver – when the plugin is not enabled, OpenURL is used, but this gives errors any time volume or issue numbers aren’t available. EBSCO had no plans to fix this. The workaround was to create a dynamic URL in the Electronic Service: gigantic code with four IF statements covers most situations…. But problems with the URLENCODE function mean lots of issues whenever there are diacritics, copyright symbols etc – Ex Libris has a fix in development. The dynamic URL also has no access to the jkey parameter, which is a problem for regional newspapers where the Alma title doesn’t match the EBSCO title.
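
The authtype/custid problem in a nutshell: a minimal Python sketch (not CQU’s actual code – the link and custid value are placeholders) showing the two query parameters EBSCOhost needs for SSO access and that the generated links lacked:

```python
# Illustrative only: append the SSO parameters missing from generated links.
from urllib.parse import parse_qsl, urlencode, urlparse, urlunparse

def add_sso_params(link: str, custid: str) -> str:
    """Add authtype/custid so an EBSCOhost link works via SSO off campus."""
    parts = urlparse(link)
    query = dict(parse_qsl(parts.query))
    query.setdefault("authtype", "sso")  # tell EBSCO to authenticate via SSO
    query.setdefault("custid", custid)   # the institution's EBSCO customer ID
    return urlunparse(parts._replace(query=urlencode(query)))

# Hypothetical example of the kind of link Alma generated:
print(add_sso_params(
    "https://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=12345",
    custid="s1234567",  # placeholder customer ID
))
```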

Essentially the solutions to the problems caused more problems….

Remaining possible solutions:

  • Go back to EZproxy (unfortunately lots of links in Moodle)
  • Go OpenAthens (probably worse than going back to EZproxy)
  • Unsubscribe from EBSCO (tempting but not practical)
  • Do nothing (use current FAQ workaround)
  • After URLENCODE bug fixed, turn off EBSCO link resolver plugin in Alma and implement Dynamic URLs
  • When URLENCODE bug fixed, ask Alma to use Dynamic URLs for everything

“No full text error” problem – caused when, for example, a PNX record has 3 ISBNs and the Alma record 2; one matches, so we get “full text available” – but the OpenURL then sends only one ISBN, and if that one isn’t in Alma it returns a “No full text” error. Ex Libris says a fix is pending development.
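
The mismatch is easier to see in miniature. A toy Python sketch (invented ISBNs) of the logic described above:

```python
# Availability is computed against ALL ISBNs, but the OpenURL carries ONE.
pnx_isbns = {"9780000000001", "9780000000002", "9780000000003"}  # PNX: 3 ISBNs
alma_isbns = {"9780000000003", "9780000000004"}                  # Alma: 2 ISBNs

print(bool(pnx_isbns & alma_isbns))  # True -> Primo shows "full text available"

openurl_isbn = "9780000000001"       # the one ISBN the OpenURL happens to send
print(openurl_isbn in alma_isbns)    # False -> "no full text" error
```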

Most issues demystified and many even solved. Still working to resolve some.

Q: Would it get anywhere if a bunch of libraries get together to lobby EBSCO to fix their link resolver?
A: Maybe; not sure of EBSCO’s reasons for hesitating.

A briefing from Ex Libris #anzreg2019

A briefing from Ex Libris on new initiatives and topics
Melanie Fitter, Professional Services Director APAC region, Ex Libris

Alma UI – looking at pain points; various system-wide tools in progress. Feedback messages have been released. Working on improving menus, choosing multiple facets, and improving configuration of mapping and code tables. Working on accessibility.

Metadata Editor – has always been a source of complaint, so working on navigation, accessibility of tools, working more easily between different areas, and adding records to an editing queue from search. Some features will start trickling through from the end of the year. The old editor will be gone around mid-2020.

Known Issues Operator – this role gives access to the known issues list, which shows high-priority issues/bugs with a fix date (a quarter or the specific monthly release) associated with them. So you can search there and then either be reassured or create your own case.

CDI – lots of benefits from the latest hardware architecture: faster update cycle, single activation, merged records instead of grouped records. About 92% of permalinks will work after the move… In Alma there’ll be a new CDI tab on collection records. Rollout: Alma/Primo VE moving Q4 2019 – Q2 2020, while Primo/SFX moves Q1 2020 – Q4 2020.

COUNTER 5 – hopefully testing done by end of year and launched to everyone by Jan 2020. Both COUNTER 4 and 5 will run in parallel until 4 is phased out ‘eventually’. In the vendor record > Usage data tab you can add a SUSHI account and choose whether it’s 4 or 5.
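
For the curious, the SUSHI side of this is a standard COUNTER_SUSHI R5 REST call – roughly what Alma does behind the scenes once the account is configured (a sketch with a hypothetical endpoint and placeholder credentials):

```python
# Fetch a COUNTER R5 Title Master Report over SUSHI (illustrative only).
import requests

BASE = "https://sushi.example-vendor.com/counter/r5"  # hypothetical endpoint

resp = requests.get(
    f"{BASE}/reports/tr",          # tr = Title Master Report
    params={
        "customer_id": "CUST123",  # placeholder credentials
        "requestor_id": "REQ456",
        "begin_date": "2019-01",
        "end_date": "2019-12",
    },
    timeout=30,
)
resp.raise_for_status()
report = resp.json()               # COUNTER R5 reports are JSON
print(report["Report_Header"]["Report_Name"])
```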

Provider Zone content loading – currently vendors send their data to Ex Libris, which does normalisation etc before it’s loaded – this causes a bottleneck. So Ex Libris is providing a place where vendors can load content straight into collections/portfolios/MARC from their own data, based on their APIs. This should be seamless from the library’s point of view – on the collection we’ll just see eg “Managed by IEEE Xplore”. Five intro partners: ProQuest, IEEE, Sage, Alexander Street Press, and Taylor and Francis – in production at the start of 2020.

Resource Sharing – looking at creating a new next-gen system. Focus on patron value, library efficiency, and a shared holdings index; also including search for nearby libraries. Estimated timing: general release end of 2020, with the ability to add non-Alma institutions as lenders/borrowers at end of 2021.

DARA – plan to add recommendations for high-demand item purchase, cataloguing (eg duplicates), and more e-collection/portfolio recommendations.

Next Gen analytics – moving towards data visualisation using OBI 12.

“It should just work”: access to library resources #anzreg2019

“It should just work”: access to library resources in Discovery layers and Open Web searching
Kendall Bartsch, ThirdIron (Gold Sponsor)

Link resolvers are cumbersome: they often claim “full-text available” but… it’s not…. Or there are so many options that they confuse users who just want the text. Perception from users is that the button “almost never works”. Suggestion to “just do what SciHub does”. Various other solutions like ResearchGate, Academia.edu, Reddit, #ICanHazPDF – sharing and piracy “steal” an estimated 20% of usage from publisher sites. [Some of this is because link resolvers are clunky; much is also because people aren’t members of libraries that can afford what they want.]

ThirdIron reinvented linking with LibKey, a linking syntax based on:

  1. article metadata – essentially a dark archive of Crossref, but also metadata from other sources. They vet, correct, normalise and maintain the data (telling Crossref about mistakes they find), and incorporate open access metadata including OADOI.
  2. entitlements data – tools to import holdings data across different vendors who all represent holding data differently
  3. library authentication/fulfilment

When a user requests an item, all this is put together by LibKey and results in the PDF or abstract URL.
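
A conceptual sketch (my illustration, not ThirdIron’s implementation – all names and data are invented) of how those three ingredients might combine to turn a DOI into a one-click link:

```python
# Metadata + entitlements + authentication -> direct PDF link.
METADATA = {  # vetted article metadata, keyed by DOI
    "10.1000/example": {"issn": "1234-5678", "pdf_path": "/content/example.pdf"},
}
ENTITLEMENTS = {"1234-5678": "https://journals.example.com"}  # ISSN -> platform
PROXY_PREFIX = "https://ezproxy.example.edu/login?url="       # fulfilment step

def resolve(doi: str) -> str | None:
    """Return a direct, authenticated PDF link for a DOI, if entitled."""
    meta = METADATA.get(doi)
    if meta is None:
        return None                           # article unknown
    platform = ENTITLEMENTS.get(meta["issn"])
    if platform is None:
        return None                           # library has no access
    return PROXY_PREFIX + platform + meta["pdf_path"]

print(resolve("10.1000/example"))
```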

LibKey Discovery: this can be integrated into various discovery layers including Primo; links can be “download PDF”, “view issue contents”, “read article”, etc (recently including articles available in HTML only, not PDF).

LibKey Link: they also want to expand the service into anywhere else you’d use an OpenURL base URL, eg Google Scholar, or linking from Web of Science, reference lists, etc. Can fall back to the regular link resolver. (This is coming soon.)

LibKey Nomad: Linking from web searching – browser extension that can be downloaded by individual or installed on an enterprise basis.

Results:

  • increasing delivery of PDFs
  • libraries reporting fewer support tickets
  • libraries estimating savings of researcher time

A national library perspective on Alma #anzreg2019

A national library perspective on Alma
Sandra McKenzie, National Library of New Zealand

Ex Libris marketing towards higher education/academic libraries. National Library purpose (per legislation) is to “enrich the cultural and economic life of New Zealand” including by collecting, preserving and protecting documents, and making them accessible for all the people of New Zealand.

NLNZ has two categories of collections:

  1. heritage collections held permanently, in controlled environments – both physical and digital
  2. general collections, borrowable but usually also kept in stack

Legal deposit – for physical resources publishers must deposit 2 copies (one for the heritage and one for the general collection); for digital resources, 1 file format. If a publisher publishes both, they deposit both, and each gets a separate record. NLNZ provides ISBNs and ISSNs, and is the national bibliographic agency for New Zealand. The bibliography’s scope includes not just legal deposit but anything about New Zealand from overseas. A monthly extract from Alma is made available in various formats, and the dataset is updated quarterly.

Analogue avalanche and digital deluge. (Project to import a few terabytes of data from musicians – will be talked about at National Digital Forum.)

Migrated to Alma/Primo in 2018, with Rosetta retained as the file store – and without systems librarians. Challenges:

  • Metadata editor – very clunky for original cataloguing, which is very important for them. A “fluid approach to adhering to MARC21”. Looking forward to the new metadata editor.
  • Born-digital workflows – nonexistent, though an Idea is under review. There’s no digital legal deposit workflow, so had to use a physical order workflow – they need to be able to chase publishers up.
  • Deduplication – digital inventory and the physical workaround inventory have different publishing numbers, and Primo dedups them. Two very different maps, contour vs cadastral, had the same title – the only difference was the series number – so Primo deduped them. Could turn this off, but then there’d be a problem with the born-digital material.
  • Serials – an ongoing challenge; vast parts of the collection have no item records, so requesting is very difficult.

Opportunities

  • Normalisation rules – used extensively, and tested in the Sandbox, which they love. Used record-by-record, eg for creating an online record based on a print record (see the sketch after this list). Also used for metadata clean-up and for bulk record creation.
  • Integration with Rosetta which stores files eg for podcasts
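
Not Alma’s normalisation-rule syntax, but the same kind of record-by-record transformation sketched with pymarc (the choice of fields to drop/add is my guess, not NLNZ’s actual rules):

```python
# Derive an online record from a print record (illustrative transformation).
from pymarc import Field, Record, Subfield

def print_to_online(print_rec: Record) -> Record:
    """Clone a print record, dropping print-only fields and marking it online."""
    online = Record()
    for field in print_rec.get_fields():
        if field.tag == "300":  # drop the physical description
            continue
        online.add_field(field)
    online.add_field(Field(
        tag="338", indicators=[" ", " "],  # RDA carrier type for online
        subfields=[Subfield(code="a", value="online resource"),
                   Subfield(code="2", value="rdacarrier")],
    ))
    return online
```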

New ways of working

  • Thinking across teams, looking at the order in which they do things
  • Building records from spreadsheets; lots of use of MarcEdit, which had previously only been used by specialists
  • A new position of “digital collecting specialist”, who uses norm rules, APIs, MarcEdit etc
  • A template for podcasts: specific details for each episode are harvested from the RSS feed to fill in the rest (sketched below), with more scripts managing import into Rosetta. From 2018 to 2019 a big increase in machine-generated records, from 500 to over 2500. (People are still involved, but the machine does the boring parts; cataloguers choose templates and analyse the subject keywords required.)
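
A guess at the shape of those harvesting scripts (not NLNZ’s actual code – the feed URL is a placeholder and the template is simplified), using only the Python standard library and outputting MarcEdit-style mnemonic records:

```python
# Fill a record template from each episode in a podcast RSS feed.
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://podcast.example.nz/feed.xml"  # placeholder feed

TEMPLATE = "=245  00$a{title}\n=500  \\\\$aPodcast episode. {pubdate}"

with urllib.request.urlopen(FEED_URL) as resp:
    tree = ET.parse(resp)

for item in tree.iterfind(".//item"):             # one record per episode
    record = TEMPLATE.format(
        title=item.findtext("title", default=""),
        pubdate=item.findtext("pubDate", default=""),
    )
    print(record)  # in practice: save as .mrk and import via MarcEdit
```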

Q: How do you deal with archival materials?
A: These are handled in a separate system.

Q: Have you managed to overcome the problems with serials?
A: Work in progress….

Exploring Esploro #anzreg2019

Exploring Esploro: migration and implementation of Esploro at Southern Cross University
Margie Pembroke, Southern Cross University

Southern Cross University punches above its weight research-wise, and management is looking to expose that research more.

In Australia many uni repositories were created from government funding. The initial focus was on green open access. Then ERA (like PBRF) came along and hijacked repositories, and then publishers hijacked open access… The CAUL review of Australian Repository Infrastructure found that integration was falling behind.

Currently they use bepress for their repository, and have a separate CRIS. Lots of manual reporting, data entry, reconciliation etc. No integrations, no workflows, reliant on self-reporting. Submission process: a researcher gets a PDF form from the website, fills it out, and emails it to the repository and to the CRIS, who each enter the data independently.

An early adopter of Esploro. A chance to influence the platform – financial benefits but also risks. Aim to soft launch this November, running the two systems in parallel until researcher profiles and outward-facing pages in Esploro look how they should.

First migration has happened – OAI-PMH export using document export, with all bepress data also backed up to AWS S3. Ex Libris helped with metadata mapping. Could export bepress statistics into Esploro too.

New workflow – a researcher writes an article, it appears on the web, Esploro finds it and surfaces it in both the repository and CRIS views. Automagic harvesting based on DOI/ISBN leverages the Summon index, de-duplicates, and links to ORCID; they will look to harvest from Research Data Australia, Figshare, Unpaywall, etc.
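
The de-duplication step matters because the same output arrives from several sources. A conceptual sketch (not Esploro internals – the identifiers are invented) of DOI-based matching:

```python
# Match harvested publications against existing records by normalised DOI.
def norm_doi(doi: str) -> str:
    """Lowercase a DOI and strip common resolver prefixes for matching."""
    doi = doi.strip().lower()
    for prefix in ("https://doi.org/", "http://dx.doi.org/", "doi:"):
        doi = doi.removeprefix(prefix)
    return doi

existing = {norm_doi("https://doi.org/10.1000/ABC123")}  # already in repository
harvested = ["doi:10.1000/abc123", "10.1000/new-paper"]  # from central index

new_items = [d for d in harvested if norm_doi(d) not in existing]
print(new_items)  # only the genuinely new output gets added
```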

Integrations are planned with the HR system (never done for Alma – staff were added ad hoc when needing to request/borrow!), the uni data warehouse, the uni ORCID API, the CRIS, Research Data Australia, Libraries Australia, and DataCite for minting DOIs.

The public interface is essentially Primo VE plus researcher profiles. Internal workflow is in Alma. Sherpa/Romeo integration etc.

Benefits for library:

  • automatic harvesting / ordering (for ERA auditing) which saves work, avoids typos, means people don’t have to remember to place orders
  • staff already familiar with how Alma works
  • can add preprint, post-print, link to paper etc all sharing basic metadata
  • ease of management of research profiles
  • support from Ex Libris community

Benefits for researcher:

  • massive timesaver – only need to manually enter metadata for items not published on the internet
  • increased visibility especially with automatic generation/updating of profile
  • compliance with funding requirements

Benefits for institution:

  • comprehensive picture of research outputs across the institution
  • can leverage Alma analytics to create reports
  • avoids duplication of data-entry; increases efficiency

Q: How do you get full-text in?
A: Initially pulls in from bepress. For new items, sends email to researchers to ask them to upload accepted manuscript.

Aligning project milestones to development schedules #anzreg2019

Moving the goal posts: aligning project milestones to development schedules
Kendall Kousek, Macquarie University

Macquarie – 45,000 students, 2000 staff. New purpose-built library opened 2011. Alma, Primo, Leganto, CampusM

A multiphase pilot introduced Leganto from 2017 – demoed to the Faculty of Arts, then tested with 8 volunteer unit convenors. Next another 11; next widened to other faculties, and so forth. Now have 400 reading lists across all 5 faculties.

When NERS opened, made suggestions, eg:

  • links to free resources – instructors expect the library to embed a direct link. In Leganto it goes to the link resolver, which by default ends up at the home page. The library was expecting to fix this, but instead instructors tried to fix it themselves – by removing data from the citation until only the manual link would work! This badly affected enthusiasm in one faculty in particular. Requested the ability to hide broken links – this resonated and was picked up.
  • duplication – the previous system let you roll over copyright info; Leganto deleted all this, so everything had to be re-entered and rechecked. Requested that duplication also duplicate copyright data – this was picked up and implemented even better than expected in some ways, but not as expected in others: librarian rollover options differed from instructor rollover options. That issue was reported and resolved.
  • on rollover, instructors were kept on the course but not on the reading list. Requested a fix, planned for this November
  • the course loader for rollover remains fairly manual; automation is probably not possible

Roadblock – lots of work required to get the LTI link into Moodle. Created a custom LTI block, designed by the library and built by the learning team. It can be added by an instructor per the library’s “instructor’s guide”.

Concern that students might miss the reading list link in the LMS and still try searching in Primo. So used the Resource Recommender in Primo – this isn’t sustainable, so the plan is to phase it out as students get used to accessing readings via the LMS.

Happy with system and fixes / improvements to it. Now able to focus on increasing usage and rolling it out further across campus.

Understanding user behaviour and motivations #anzreg2019

Understanding user behaviour and motivations when requesting resources
Jessie Donaghey, Bond University

Small research project focused on resource sharing requests.

Libraries often look at the usability of getting to full-text – but we also need to make sure the process is seamless when we don’t have the full-text. Bond has made improvements, but hadn’t stopped to investigate user behaviour during this.

Goal to “simplify and promote mechanisms for staff and students to request resources that they can’t find in Library Search”. Wanted to assess the service using analytics data, and understand users with a survey.

Assessing the service

Until 2015 – a fairly manual system: users filled out an online form by hand, and by the end of the process data had to be entered into four systems. No integrations, prone to typos, and hard for users to even know the service existed.

Up to 2018 – the form was automatically populated by Alma. Users could track progress through their library account. Enabled silent login. But there were still two extra steps to find the service – including ticking “Expand my results”, which users never thought of.

At the end of 2018, considered making “expand my results” the default. Until then it was used in 1% of sessions. Nervous about flooding results with “no full-text” records, they took a sample of searches, replicated them, and found that only some would gain a small number of “no full-text” results, so turned it on. Between 2018 and 2019:

  • 60% increase in requests supplied
  • 94% increase in unique requesters
  • 85% increase in first time requesters – especially increase in undergrads

There were small increases after going live with Alma Resource Sharing and after enabling silent login for Primo, but a very large increase after enabling “expand my results” by default.

Understanding users

Surveyed users who’d recently received something through the service. Higher response rate from postgrads than undergrads.

Users mostly either had it recommended by library staff (especially regular users), or found the link in Primo (especially those using it for the first time).

Users mostly expected article requests to take around three days (including new requesters, who actually leaned towards expecting a longer delivery time). This matched actual supply times. (May need to advertise this more so as not to put off people expecting it to take longer.)

Were users placing requests for items they didn’t really need? Mostly not: 51% needed the specific resource, and for 33% it complemented a resource they’d already found. (New users and undergrads had a more even split between these two.)

Did they track progress? 24% yes; 40% didn’t know they could. Regular users were more likely to know, but still often chose not to – perhaps being more familiar with the wait time, they felt less need.

The most important perceived features were ease of placing a request, then the ability to place multiple requests at once. Least important was automatic SMS updates.

94% were extremely likely to use it again – mostly because they have to (eg for specialised resources), and/or they were impressed by the efficient service.

Primo and Alma analytics reports used for this presentation are documented at http://tiny.cc/JD-ANZREG-19

Q: Any complaints about having ticked the “expand my results” box?
A: When replicating searches, we especially replicated those faceted to articles/peer-reviewed, to be sure. But no complaints. Maybe they just clicked the button.

Q: Any concerns about increase of usage, and being able to maintain fast turnaround?
A: Resource sharing is available to all students. Significant increase in usage – the document delivery team had their highest usage ever in September – but still managed to maintain a fast turnaround. May not be sustainable (especially budget-wise) – a decision for managers.

Predicting Student Success with Leganto #anzreg2019

Predicting Student Success with Leganto: a “Proof of Concept” machine learning project
Linda Sheedy, Curtin University

Early adopters of Leganto as a reading list solution in 2015 – mainstreamed in 2017. Now 4700+ reading lists with 115,300 citations, viewed 1.5 million times by 42,000 students.

Ex Libris proposed a proof of concept project “to use machine learning to investigate the correlation between student success and activity with the Leganto Reading List”. Curtin had already been active using learning analytics so thought it would be a good fit.

Business need – early prediction (within 1-6 weeks) of students who’ll most likely struggle with their course.

Data:

  • student profile, grade and academic status data from Curtin – this took significant time, effort and inter-departmental work to produce. Course structures and demographics are complicated.
  • Leganto usage from Ex Libris

Lots of work also combining the datasets.

Function: Ex Libris considered a number of possible algorithms – they currently seem to be settling on Random Forest, but the final outcome may be a two-stage model.

Data so far covers Semester 2 2016 – Semester 2 2018. The algorithm has found the following features most predictive:

  • student historical average grades
  • historical usage engineered feature
  • weighted student usage per course
  • student age
  • student usage in week 1 in relation to class

Model total accuracy is 91.9%.
Recall: it catches 18.8% of students at risk.
Precision: 69.44% (ie of 10 students predicted to be at risk, about 7 actually will be) – considered high.
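
For reference, a toy version of this setup (illustrative only – synthetic data and arbitrary features, not Curtin’s dataset or Ex Libris’s model) showing a Random Forest evaluated with the same three metrics:

```python
# Train a Random Forest and report accuracy, recall and precision.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((5000, 5))  # stand-ins for grades, usage, age, etc.
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 0.3, 5000) < 0.6).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

pred = model.predict(X_test)
print("accuracy :", accuracy_score(y_test, pred))
print("recall   :", recall_score(y_test, pred))    # at-risk students caught
print("precision:", precision_score(y_test, pred)) # flagged students truly at risk
```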

The model clearly needs more work – but increasing recall shouldn’t come at the expense of precision. More data may help, along with more tweaking of the algorithm.

The project has concluded; not sure where Ex Libris will take it next or whether it’ll become a Leganto offering.

Q: What intervention did you take if any?
A: Just a closed project, all anonymised – just to see if it’d work – so no intervention during this project.

Q: Was demographic data other than age included?
A: The algorithm found itself that age was a major predictor (other demographic data was included but algorithm didn’t find it to be predictive of success).

Q: How was analysis improved?
A: At the start of the project they hoped to prove that students would succeed if they read more. But as it went on, it shifted to seeing what predicted when students would struggle.