
Integration with the Alma Course API #anzreg2018

The Alma Course API – An Exercise in Course Integration
David Lewis

The Alma Course Loader was inflexible – it could only run once a day and offered no way to recover from errors – so they wanted to write their own integration. They migrated to Alma when only the SOAP API was available, and later had to rewrite it for the REST API. With the advent of Leganto the integration has become even more important.

API quotas matter, as does minimising the frequency of calls (especially since the same API gateway is shared by all Alma customers!). Course field mappings are also important to get right at the start. Another difficulty was course collapsing and parent–child course relationships (eg different cohorts within one course), which mattered at their university and was the hardest part to figure out. They ended up using the course code for normal courses and the parent course code for collapsed courses.
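As a sketch of the kind of quota-friendly client this implies: the /almaws/v1/courses path and apikey parameter are Alma's real REST conventions, but the gateway URL, key and interval below are placeholders, not their actual code.

```python
import time
import requests

API_BASE = "https://api-ap.exlibrisgroup.com/almaws/v1"  # region-specific gateway
API_KEY = "..."      # placeholder: needs read access to the Courses API
MIN_INTERVAL = 0.5   # seconds between calls -- the gateway is shared by all customers

_last_call = 0.0

def alma_get(path, **params):
    """GET from the Alma REST API, spacing calls out to conserve quota."""
    global _last_call
    wait = MIN_INTERVAL - (time.time() - _last_call)
    if wait > 0:
        time.sleep(wait)
    params["apikey"] = API_KEY
    resp = requests.get(f"{API_BASE}{path}", params=params, timeout=30)
    _last_call = time.time()
    resp.raise_for_status()
    return resp

# fetch courses a page at a time rather than one call per course
page = alma_get("/courses", limit=100, offset=0)
```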

They discovered that even when they requested JSON, error messages would come back as XML and crash their system – so they ended up writing their program to use XML throughout instead of JSON.
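A minimal sketch of the defensive parsing this pushes you towards – dispatch on what actually arrived, not on what was requested (hypothetical helper, not their code):

```python
import xml.etree.ElementTree as ET

def parse_alma_response(resp):
    """Alma error responses can arrive as XML even when JSON was requested,
    so parse by Content-Type rather than by what we asked for."""
    ctype = resp.headers.get("Content-Type", "")
    if "json" in ctype:
        return resp.json()
    # the safe fallback -- and why they settled on XML throughout
    return ET.fromstring(resp.content)
```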

Logging is a good debugging tool and audit trail, and is useful when raising jobs with Ex Libris.

Senior management often doesn't value the library's contribution to course management – this is often political, and requires a lot of awareness-raising among lecturers etc to get them to talk up the library to project managers.

Resource sharing partner synchronisation #anzreg2018

Managing Resource Sharing Partners in Alma
Nishen Naidoo, Macquarie University

  • Used to use VDX – an external system, not transparent to the end user. But it was good that partners were managed centrally.
  • Alma provides a single system with no additional user-system integration, and a much richer user experience via Primo. But partner management is up to each institution.
  • Connection options: broker (all requests via intermediary which handles billing) vs peer-to-peer
  • managing partners means tracking contact details and suspension status. It's tricky to do this automatically, so most people update manually based on the LADD webpage (AU suspension info), ILRS (AU addresses), the Te Puna CSV (NZ contact details) and mailing-list announcements (NZ suspensions).
  • Part 1: designed a harvester to scrape data from these sources and put it into a datastore as JSON. It also captures and stores changes, eg of institution name or contact email.
  • Part 2: designed a sync service (API) to get data from the datastore and upload it to Alma. It needs your NUC symbol, an API key with read/write access to resource sharing, and a configuration in an Elasticsearch index. (There's a substantial technology stack.) The service then pulls partner data from your Alma institution, creates partner records, compares them with the existing ones, and updates Alma with the changes – see the sketch after the links below.
  • future – hope to host it in AWS. Wanting to get LADD/Te Puna to release data through a proper API. Ideally Ex Libris would get the data directly, but for now you can understand why they wouldn't want to touch it with a bargepole.
  • documentation and download at https://mqlibrary.github.io/resource-sharing-partners-harvest/ and https://mqlibrary.github.io/resource-sharing-partners-sync/
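The compare-and-update loop at the heart of the sync service might look something like this. The /almaws/v1/partners endpoint is Alma's real Resource Sharing Partners API, but the JSON field names and the diff logic here are a simplified guess at what the service actually does:

```python
import requests

API_BASE = "https://api-ap.exlibrisgroup.com/almaws/v1"
API_KEY = "..."  # needs read/write access to resource sharing partners

def sync_partners(harvested):
    """Push harvested partner records (keyed by NUC symbol) into Alma:
    create the ones Alma lacks, update the ones that changed."""
    resp = requests.get(f"{API_BASE}/partners",
                        params={"apikey": API_KEY, "limit": 100},
                        headers={"Accept": "application/json"})
    existing = {p["partner_details"]["code"]: p
                for p in resp.json().get("partner", [])}

    for code, record in harvested.items():
        if code not in existing:
            requests.post(f"{API_BASE}/partners",
                          params={"apikey": API_KEY}, json=record)
        elif record != existing[code]:  # the real service diffs address/status fields
            requests.put(f"{API_BASE}/partners/{code}",
                         params={"apikey": API_KEY}, json=record)
```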

Institutional repository in Alma Digital #anzreg2018

Optimising workflows and utilising Alma APIs to manage our Institutional Repository in Alma Digital
Kate Sergeant, University of South Australia

They used DigiTool as an interim solution, with a long-term plan to move to Alma. For a while they used the electronic resource component to manage metadata, with a local filestore. Last year they finally moved everything properly into Alma Digital.

In the early stages they needed to generate handles and manage files. Phase 2 was the development of templated emails for requesting outputs from researchers. Phase 3, last year, added workflow management, data validation, author management, licence management…

Records are submitted directly by researchers, or harvested via the Web of Science and Scopus APIs combined with the Alma APIs for adding bib records. They land in Alma as suppressed records – often hundreds at a time. They try to prioritise manually submitted material and the easier material (eg journal articles), and make sets of the incoming records.
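Creating those suppressed records via the Bibs API might look like the sketch below. POST /almaws/v1/bibs is Alma's real endpoint; the skeletal MARCXML only shows the shape (Alma expects a much fuller record), and whether the suppression flag is honoured on create or needs a follow-up update is a detail to check.

```python
import requests

API_BASE = "https://api-ap.exlibrisgroup.com/almaws/v1"
API_KEY = "..."  # needs read/write access to Bibs

BIB_TEMPLATE = """<bib>
  <suppress_from_publishing>true</suppress_from_publishing>
  <record>
    <datafield tag="245" ind1="0" ind2="0">
      <subfield code="a">{title}</subfield>
    </datafield>
  </record>
</bib>"""

def add_suppressed_bib(title):
    """Create a bib that lands suppressed, awaiting mediated review."""
    resp = requests.post(f"{API_BASE}/bibs",
                         params={"apikey": API_KEY},
                         data=BIB_TEMPLATE.format(title=title).encode("utf-8"),
                         headers={"Content-Type": "application/xml"})
    resp.raise_for_status()
    return resp.content  # the created record, including its new MMS ID
```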

The Alma native interface doesn't always show all the data needed, so they built their own dashboard on the Alma APIs, pulling out the things they care about (title, publication date, author, resource type, date added). It offers canned searches (eg Google title search, DOI resolver, DOI in Scopus, ISSN in Ulrichs, a prepopulated interloan form…). They review metadata, eg authors/affiliations (with links into the Alma metadata editor for actual editing, and through to the public profile). Licence information sits in Alma's licence module, showing archiving rights, the version allowed for archiving and the embargo period – with links to their copyright spreadsheet and to Sherpa/Romeo.
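The canned searches are essentially URL templates filled from fields the dashboard already holds; a sketch (the Google and doi.org URL shapes are real, the record structure is illustrative):

```python
from urllib.parse import quote_plus

def canned_searches(rec):
    """One-click lookups built from fields pulled via the Alma APIs."""
    links = {"google_title":
             "https://www.google.com/search?q=" + quote_plus('"%s"' % rec["title"])}
    if rec.get("doi"):
        links["doi_resolver"] = "https://doi.org/" + rec["doi"]
    # Scopus, Ulrichs and the interloan form follow the same pattern,
    # each with its own base URL and query parameters
    return links
```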

Staff would often forget to un-suppress the record – so they added that ability at the point of minting the handle. The handle is put into the record at the same time, and a DOI is minted where relevant (eg for data going to ANDS).
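A sketch of folding the un-suppress into handle minting, assuming (as I recall the Bibs API behaving) that suppress_from_publishing is writable via PUT /almaws/v1/bibs/{mms_id}:

```python
import requests
import xml.etree.ElementTree as ET

API_BASE = "https://api-ap.exlibrisgroup.com/almaws/v1"
API_KEY = "..."

def publish_with_handle(mms_id, handle_url):
    """Write the freshly minted handle into the bib and un-suppress it
    in one pass, so the un-suppress step can't be forgotten."""
    resp = requests.get(f"{API_BASE}/bibs/{mms_id}", params={"apikey": API_KEY})
    bib = ET.fromstring(resp.content)

    bib.find("suppress_from_publishing").text = "false"

    # add the handle as an 856$u in the embedded MARCXML
    record = bib.find("record")
    f856 = ET.SubElement(record, "datafield",
                         {"tag": "856", "ind1": "4", "ind2": "0"})
    ET.SubElement(f856, "subfield", {"code": "u"}).text = handle_url

    requests.put(f"{API_BASE}/bibs/{mms_id}",
                 params={"apikey": API_KEY},
                 data=ET.tostring(bib),
                 headers={"Content-Type": "application/xml"})
```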

Finally it composes a templated email to the researcher – eg 'post-print required' – with a built-in delay so the email only goes out after the item has actually gone live, which often takes six hours.
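The delay is simple to model: stamp each queued email with an earliest-send time and let a scheduled job flush whatever is due (hypothetical structure, not their code):

```python
from datetime import datetime, timedelta

PUBLISH_LAG = timedelta(hours=6)  # items typically take ~6h to go live

def queue_email(queue, record_id, template, went_live_at):
    """Hold the templated email until the record should be visible."""
    queue.append({"record_id": record_id,
                  "template": template,  # eg "post-print required"
                  "send_after": went_live_at + PUBLISH_LAG})

def send_due(queue, send):
    """Called from a scheduled job: send anything whose hold has elapsed."""
    now = datetime.now()
    for item in [i for i in queue if i["send_after"] <= now]:
        send(item)
        queue.remove(item)
```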

The dashboard also includes exception reports etc, a record-enhancement facility (eg with WoS data), and publication/person lookup.

Ex Libris company / product updates #anzreg2018

Ex Libris company update
Bar Veinstein, President Ex Libris

  • in 85 of the top 100 unis; 65 million API calls/month; percentage of new sales in the cloud up from 16% in 2009 to 96% in 2017; 92% customer satisfaction
  • Pivot for exploration of funding/collaboration https://www.proquest.com/products-services/Pivot.html
  • aim to develop solutions sustainably so not a proliferation of systems for developing needs
  • looking at more AI to develop recommendations, eg “high patron demand for 8 titles – review and purchase?”, “based on usage patterns, you should move 46 titles from closed stacks to open shelves”, “your interloans rota needs load balancing – configure now?”, “you’ve got usage from vendors who provide SUSHI accounts you haven’t set up yet – do that now?”, plus algorithms around SUSHI vs usage.
  • serious about retaining Primo/Summon; shared content and metadata
  • Primo VE – realtime updates. Trying to reduce complexity of Primo Back Office (pipes etc – but unclear what replaces this when pipes are “all gone”)
  • RefWorks: not just for end users, but also aggregated analytics on the cloud platform. Should this be connected/equal to the e-shelf in Primo?
  • Leganto – ‘wanting to get libraries closer to teaching and learning’ – tracking whether instructors are actually using it and big jumps between semesters.
  • developing app services (UX, workflow, collaboration, analytics, shared data) and infrastructure services (agile, multi-tenancy, open APIs, metadata schemas, auth) on top of the cloud platform – if you’ve got one thing with them it’s very quick to implement another, because they already know how you’re set up.
  • principles of openness: more transactions now happen via API than by direct staff action.
  • https://trust.exlibrisgroup.com/
  • Proquest issues – ExL & PQ have been passing the customer-service buck, so they aim to align this, eg being able to transfer support cases directly between the two Salesforce instances.

Ex Libris product presentation
Oren Beit-Arie, Ex Libris Chief Strategy Officer

  • 1980s acquisitions not part of library systems -> integrated library systems
  • 2000s e-resource mgmt not part of ILS -> library services platform (‘unified resource mgmt system’)
  • now teaching/learning/research not part of LSPs -> … Ex Libris’s view of a cloud ‘higher education platform’
  • Leganto
    – course reading lists; copyright compliance; integration with Alma/Primo/learning management system
    – improve teaching and learning experience; student engagement; library efficiency; compliance; maximise use of library collections
    – Alma workflows, creation of OpenURLs…
  • Esploro
    – in dev
    – RIMs
    – planning – discovery and analysis – writing – publication – outreach – assessment
    – researchers (publish, publish, publish); librarians (provide research services); research office (increase research funding/impact)
    – [venn diagram] research admin systems [research master]; research data mgmt systems [figshare]; institutional repositories [dspace]; current research information systems [elements]
    – pain points for researchers: too many systems, overhead, lack of incentive, hard to keep a public profile up to date
    – for research office – research output of the uni, lack of metrics, hard to track output and impact, risk of noncompliance
    – next gen research repository: all assets; automated capture (don’t expect all content to be in repository); enrichment of metadata
    – showcase research via discovery/portals; automated researcher profiles; research benchmarks/metrics
    – different assets including creative works, research data, activities
    – metadata curation and enrichment (whether direct deposit, mediated deposit, automatic capture) through partnerships with other parties (data then flows both ways, with consent)
    – guiding principles: not to change researchers’ habits; not to create more work for librarians; not to be another ‘point solution’ (interoperable)
    – parses PDFs on upload for metadata (also checks against Primo etc). Keywords are suggested based on the researcher profile
    – deposit management, apc requests, dmp management etc in “Research” tab on Alma
    – allows analytics of eg journals in library containing articles published by faculty
    – tries to track relationships with datasets
    – public view essentially a discovery layer (it’s very Primo NewUI with bonus document viewer – possibly just an extra view) for research assets – colocates article with related dataset
    – however they have essentially ruled research administration systems out of scope, starting instead where their strength is. They do have Pivot, however.

E-resource usage analytics in Alma #anzreg2018

Pillars in the Mist: Supporting Effective Decision-making with Statistical Analysis of SUSHI and COUNTER Usage Reports
Aleksandra Petrovic, University of Auckland

There is an increasing call for evidence-based decision-making, combined with the rising importance of e-resources (from 60% to 87% of the collection in the last ten years), in the context of decreasing budgets and changes in user behaviour.

Options: EBSCO Usage Consolidation, Alma Analytics, or the Journal Usage Statistics Portal (JUSP). Pros of Alma: no additional fees; part of the existing system; no restrictions on historical records; reports can be modified/enhanced; and they could have input into future development. But it does involve more work than the other systems.

Workflow: data arrives via manual harvesting, automatic receipt of (mostly COUNTER) reports, or email. It all goes into Alma Analytics; from there they create reports, analyse, and make subscription decisions.

They use the Pareto principle: eg 20% of vendors are responsible for 80% of usage. Similarly, the 80% of project time spent on data gathering creates only 20% of the business value, while the 20% spent on analysis delivers 80% of it.
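The vendor side of that is a one-pass cumulative calculation; a minimal sketch:

```python
def pareto_vendors(usage_by_vendor, threshold=0.8):
    """Smallest set of vendors covering `threshold` of total usage --
    typically ~20% of vendors for ~80% of use."""
    total = sum(usage_by_vendor.values())
    running, top = 0, []
    for vendor, count in sorted(usage_by_vendor.items(), key=lambda kv: -kv[1]):
        top.append(vendor)
        running += count
        if running >= threshold * total:
            break
    return top

# eg pareto_vendors({"Vendor A": 50000, "Vendor B": 30000, "Vendor C": 2000})
```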

Some vendors were slow to respond (asking at renewal time increased their motivation…). There were harvesting bugs (eg an issue with JR1), reporting failures (especially in the move from HTTP to HTTPS) and difficulty tracking the harvesting. It's important to monitor what data is actually being harvested before basing decisions on it! Alma provides a "Missing data" view, but it can't be exported into Excel for filtering, so they created a similar report in Alma Analytics (which they're willing to share).

So far they have 106 SUSHI vendors, 45 manual COUNTER vendors and 17 non-COUNTER vendors, with stats from 85% of vendors.

They can see trends in open-access usage, and can compare use of recent vs older material – which drives decisions around backfiles vs rolling embargoes. They can also look at usage of the titles in a package – eg one where only three titles had high usage, so they bought just those and cancelled the package.

All reports are in one place, and can be imported into Tableau for display/visualisation: a nice cherry on top.

Cancelling low-use items and reducing duplication has saved money. They hope more vendors will adopt SUSHI to increase the data available. If doing it again they would:

  • use a generic contact email for gathering data
  • use the dashboard earlier in the project

Cost per use is trickier to get out – especially with exchange-rate issues, but it also sounds like the cost and usage reports don't quite match up in Alma.
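The calculation itself is trivial once the inputs line up; the hard part is exactly those inputs (invoice currency, matching cost rows to usage rows). A sketch:

```python
def cost_per_use(annual_cost, fx_rate, total_requests):
    """Cost per use in local currency; fx_rate converts the invoice
    currency to local (the exchange-rate wrinkle noted above)."""
    if total_requests == 0:
        return None  # zero recorded use: flag for review, don't divide
    return (annual_cost * fx_rate) / total_requests

# eg a USD 12,000 package with 3,400 full-text requests at 1.45 NZD/USD
print(cost_per_use(12000, 1.45, 3400))  # ~5.12 NZD per use
```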

Alma plus JUSP
Julie Wright, University of Adelaide

Moved from Alma Analytics to JUSP, and then to using both. Timeline:

  • Manual analysis of COUNTER reports: very time-intensive – 2-3 weeks each time, and they wanted to do it monthly…
  • UStat was better, but SUSHI-only, with only specific reports and no integration with Alma Analytics
  • Alma Analytics is better still, but still needs monitoring (see the above-mentioned HTTPS issues)
  • JUSP – COUNTER/SUSHI only; the reports are easy and good, but you can't make your own
Alma vs JUSP:

  Alma                          JUSP
  much work                     easy
  complex analyses available    only simple reports
  only has 12 months of data    data back to 2014
                                benchmarking
                                works with vendors on issues
                                quality control of data

JUSP also has its own SUSHI server, so you can harvest from there into Alma. This causes duplicate-data issues when the publisher names don't match exactly: eg JUSP shows "BioOne" where there are actually various publishers, or "Wiley" where Alma has "John Wiley and Sons". They might need to delete all the Alma data and use only the JUSP data.
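One way around the mismatch, short of deleting the Alma data, would be to normalise publisher labels before comparing rows; a sketch with a hand-maintained alias table:

```python
# map labels that differ between JUSP and Alma onto one canonical name
ALIASES = {
    "john wiley and sons": "Wiley",
    "wiley": "Wiley",
    "bioone": "BioOne",  # JUSP's umbrella label for several member publishers
}

def canonical(publisher):
    return ALIASES.get(publisher.strip().lower(), publisher.strip())

def dedupe(rows):
    """Keep one usage row per (canonical publisher, title, month)."""
    seen, out = set(), []
    for row in rows:
        key = (canonical(row["publisher"]), row["title"], row["month"])
        if key not in seen:
            seen.add(key)
            out.append(row)
    return out
```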