
Aligning project milestones to development schedules #anzreg2019

Moving the goal posts: aligning project milestones to development schedules
Kendall Kousek, Macquarie University

Macquarie – 45,000 students, 2000 staff. New purpose-built library opened 2011. Alma, Primo, Leganto, CampusM

Multiphase pilot to introduce Leganto from 2017 – demo to Faculty of Arts, tested with 8 volunteer unit convenors. Next another 11; next widened to other faculties, and so forth. Now have 400 reading lists in all 5 faculties.

When NERS opened made suggestions eg:

  • links to free resources – instructors expect library to embed direct link. In Leganto it goes to link resolver which ends up going to home page by default. Library were expecting to fix this, but instead instructors were trying to fix it themselves – by removing data from the citation until only the manual link would work! Badly affected enthusiasm in one faculty in particular. Requested ability to hide broken links – this resonated and was picked up.
  • duplication – previous system let you rollover copyright info; Leganto deleted all this so everything had to be re-entered and rechecked. Requested duplication would duplicate copyright data – was picked up and implemented even better than expected in some ways – but not as expected in others. Librarian rollover options different from instructor rollover options – but issue was reported and resolved.
  • on rollover instructors kept on course but not on reading list. Requested a fix, planned for this November
  • course loader for rollover remains fairly manual, automation probably not possible

Roadblock – lots of workload required in getting LTI link into Moodle. Created a custom LTI block designed by library and created by learning team. Can be added by instructor per library’s “instructor’s guide”.

Concern students might miss the reading list link in the LMS and still try searching in Primo. So used Resource Recommender in Primo – this isn’t sustainable so plan to phase it out as students get used to accessing readings via LMS.

Happy with system and fixes / improvements to it. Now able to focus on increasing usage and rolling it out further across campus.

Understanding user behaviour and motivations #anzreg2019

Understanding user behaviour and motivations when requesting resources
Jessie Donaghey, Bond University

Small research project focused on resource sharing requests.

Libraries often look at usability of getting to full-text – but we also need to make sure the process is seamless when we don’t have the full-text. At Bond have made improvements but haven’t stopped to investigate user behaviour during this.

Goal to “simplify and promote mechanisms for staff and students to request resources that they can’t find in Library Search”. Wanted to assess the service using analytics data; and understand users with a survey.

Assessing the service

Until 2015 – fairly manual system, with users filling out an online form and, by the end of the process, data having to be entered into four systems. No integrations, prone to typos, hard for users to even know it existed.

Up to 2018 – form automatically populated by Alma. Users could track progress through their library account. Enabled silent login. But still two extra steps to find the service – including ticking “Expand my results”, which users never thought of.

End of 2018 considered making “expand my results” the default. Until then it was used in 1% of sessions. Nervous about flooding results with “no full-text” records, they took a sample of searches, replicated them, and found only some would have a small number of “no full-text” results, so turned it on. Between 2018 and 2019:

  • 60% increase in requests supplied
  • 94% increase in unique requesters
  • 85% increase in first time requesters – especially increase in undergrads

Small increases after going live with Alma Resource Sharing and after enabled silent login for Primo, but now a very large increase after enabling “expand my results” by default.

Understanding users

Surveyed users who’d recently received something through the service. Higher response rate from postgrads than undergrads.

Users mostly either had it recommended by library staff (especially regular users), or found link in Primo (especially those using it for the first time).

Users mostly expected article requests would take around three days (including new requesters who actually leaned towards expecting a longer delivery time). This matched with supply time reality. (May need to advertise this more so as not to put off people expecting it to take longer.)

Were users placing requests for items they didn’t really need? Mostly (51%) users needed the specific resource; for 33% it complemented a resource they’d already found. (New users and undergrads had a more even split between these two.)

Did they track progress? 24% yes, 40% didn’t know they could. Regular users more likely to know, but still often chose not to – perhaps more familiar with wait time so less perceived need.

Most important perceived features were ease of placing request, then ability to place multiple requests at once. Least important was auto SMS updates.

94% extremely likely to use it again – mostly because they have to (eg for specialised resources), and/or were impressed by the efficient service.

Primo and Alma analytics reports used for this presentation are documented at http://tiny.cc/JD-ANZREG-19

Q: Any complaints about having ticked the “expand my results” box?
A: When replicating searches, they especially replicated those faceted to articles/peer-reviewed to be sure. But no complaints. Maybe users just clicked the button.

Q: Any concerns about increase of usage, and being able to maintain fast turnaround?
A: Resource sharing is available to all students. Significant increase in usage and document delivery team had highest usage ever in September, still managed to maintain high turnaround. May not be sustainable (especially budget-wise) – decision for managers.

Predicting Student Success with Leganto #anzreg2019

Predicting Student Success with Leganto: a “Proof of Concept” machine learning project
Linda Sheedy, Curtin University

Early adopters of Leganto as a reading list solution in 2015 – mainstreamed in 2017. Now 4700+ reading lists with 115,300 citations viewed 1.5 million times by 42,000 students.

Ex Libris proposed a proof of concept project “to use machine learning to investigate the correlation between student success and activity with the Leganto Reading List”. Curtin had already been active using learning analytics so thought it would be a good fit.

Business need – early prediction (within 1-6 weeks) of students who’ll most likely struggle with their course.

Data:

  • student profile, grade and academic status data from Curtin – took significant time and effort to produce, plus inter-departmental work. Course structure and demographics are complicated.
  • Leganto usage from Ex Libris

Lots of work also combining the datasets.

Function: Ex Libris considered a number of possible algorithms – currently seems to be settling on the Random Forest algorithm but the final outcome may be a two-stage model.

Data so far covers Semester 2 2016 to Semester 2 2018. The algorithm has found the following features most predictive:

  • student historical average grades
  • historical usage engineered feature
  • weighted student usage per course
  • student age
  • student usage in week 1 in relation to class

Model total accuracy is 91.9%
Recall: it catches 18.8% of students at risk
Precision: 69.44% (ie for 10 students predicted at risk, 7 actually will be) – considered high
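The reported precision and recall imply a particular confusion matrix. As a sanity check, here is a minimal Python sketch that reproduces the percentages from hypothetical true/false-positive counts (the project’s actual counts weren’t published, so the numbers below are chosen only to match the figures above):

```python
# Hypothetical counts chosen only to match the reported percentages;
# the project's real confusion matrix was not published.
def precision_recall(tp, fp, fn):
    """Precision = TP/(TP+FP); recall = TP/(TP+FN) for the 'at risk' class."""
    return tp / (tp + fp), tp / (tp + fn)

# eg 25 students correctly flagged, 11 flagged wrongly, 108 at-risk students missed
p, r = precision_recall(tp=25, fp=11, fn=108)
```

With these counts p ≈ 0.694 and r ≈ 0.188; raising recall means catching more of the missed at-risk students without inflating the false positives.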

The model clearly needs more work – but increasing recall shouldn’t be at expense of precision. More data may help along with more tweaking of algorithm.

Project has concluded; not sure where Ex Libris will take the project next or whether it’ll become a Leganto offering.

Q: What intervention did you take if any?
A: Just a closed project, all anonymised – just to see if it’d work – so no intervention during this project.

Q: Was demographic data other than age included?
A: The algorithm found itself that age was a major predictor (other demographic data was included but algorithm didn’t find it to be predictive of success).

Q: How was analysis improved?
A: At start of project hoped to prove that students would succeed if they read more. But as it went on it shifted to seeing what predicted when students would struggle.

Automation and integration with Agile and continuous development #anzreg2018

Automation and integration
Peter Brotherton, SLNSW

Agile

  • Idea: requirements and solutions evolve, not defined upfront – continual improvement process including of communication. Early and continuous delivery, welcoming changing requirements, communication and reflection with a view to tuning and adjusting. Working software is the primary measure of progress.
  • Challenges: risk-averse culture; documentation-heavy project management framework; hard to change mindsets. When they first tried to do agile they just ended up doing waterfall over and over and over again. Agile training workshops were helpful.

CI/CD: Continuous Integration/Continuous Delivery/Deployment

  • Continuous Integration – merging feature branches back into main branch frequently – requires test automation to ensure quality of your unit tests and integration testing as well.
  • Continuous Delivery – automated release process
  • Continuous Deployment – automated deployment to production
  • Unit-testing – testing units of source code – function, class, method
  • System-testing – testing integrated system, often through user interface
  • Docker is a lightweight containerisation technology; it helps standardise application dependencies across environments, making dev setup and deployment easy.
  • Fewer bugs reach production and less time is spent manually testing, despite releasing more frequently and so being more responsive.
  • Use Bamboo, also considering Jenkins
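As a minimal illustration of the unit-testing idea above (the function under test is invented for the example, not from the talk):

```python
import unittest

def normalise_barcode(raw: str) -> str:
    """Unit under test (hypothetical): tidy a scanned item barcode."""
    return raw.strip().upper()

class NormaliseBarcodeTest(unittest.TestCase):
    def test_strips_whitespace_and_uppercases(self):
        self.assertEqual(normalise_barcode("  ab123 "), "AB123")

    def test_clean_input_unchanged(self):
        self.assertEqual(normalise_barcode("XY9"), "XY9")
```

Run with `python -m unittest`; a CI server such as Bamboo or Jenkins runs the same suite automatically on every merge back to the main branch.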

Eg Alma acceptance tests

  • Can’t write unit tests as don’t have source code, and can’t control when releases happen. But can do browser-based system tests.
  • Audited critical business processes in each area of the library. Documented them step by step in Excel, and started manual testing on the Sandbox release – super tedious. Now working on automating acceptance tests using the Python-based Robot Framework (uses either the DOM or XPath, possibly also coordinates), which is working well. (This auditing/documentation also highlighted efficiencies they could make in regular business processes.)
  • Change in UI did break the script once. Change in data hasn’t yet.

Analysing logs #anzreg2018

How to work with EZproxy logs in Splunk. Why; how; who
Linda Farrall, Monash University

Monash uses EZproxy for all access either on/off campus. Manage EZproxy themselves. Use logs for resource statistics and preventing unauthorised access. Splunk is a log-ingestion tool – could use anything.

Notes can’t rely just on country changes though this is important as people use VPNs a lot. Eg people in China especially appear elsewhere; and people often use US VPN to watch Netflix and then forget to turn it off. Similarly total downloads isn’t very important as illegal downloads often happen a bit by bit.

Number of events by sessionid can be an indicator; as can number of sessions per user. And then there’s suspicious referrers eg SciHub! But some users do a search on SciHub because it’s more user-friendly and then come to get the article legally through their EZproxy.
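A Splunk query would normally do this aggregation, but the idea is simple enough to sketch in Python. The log layout here is an assumption – where the session id sits depends on your EZproxy LogFormat directive:

```python
from collections import Counter

def events_per_session(log_lines):
    """Count events per session id, assumed here to be the third field of each line."""
    counts = Counter()
    for line in log_lines:
        fields = line.split()
        if len(fields) > 2:
            counts[fields[2]] += 1
    return counts

# Simplified sample lines; adjust the field index to your own log format.
sample = [
    "10.0.0.1 - sessA user1 [01/Jan/2019:10:00:00] GET /article1 200",
    "10.0.0.1 - sessA user1 [01/Jan/2019:10:00:05] GET /article2 200",
    "10.0.0.2 - sessB user2 [01/Jan/2019:11:00:00] GET /article1 200",
]
counts = events_per_session(sample)
```

Sessions with abnormally high counts – or users with abnormally many sessions – become candidates for manual investigation rather than automatic blocking.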

https://github.com/prbutler/EZProxy_IP_Blacklist – doesn’t use this directly as doesn’t want to encourage them to just move to another IP.

They also run a report of users who seem to be testing accounts against different databases.

Splunk can send alerts based on queries. Also is doing work with machine learning so could theoretically identify ‘normal’ behaviour and alert for abnormal behaviour.

But currently Monash does no automated blocking – investigates anything that looks unusual first.


Working with Tableau, Alma, Primo and Leganto
Sabrina Alvaro, UNSW; Megan Lee, Monash University

Tableau server: self-hosted or Tableau-hosted (these two give you more security options to make reports private), and public (free) version.

Tableau desktop: similarly enterprise vs public.

UNSW using self-hosted server and enterprise desktop, with 9 dashboards (or ‘projects’)

For Alma/Primo can’t use Ex Libris web data connector so extract Analytics data manually but it may be a server version issue.

Easy interface to create report and then share with link or embed code.

UNSW still learning. Want to join sources together, identify correlations, capture user stories.

Integration with the Alma Course API #anzreg2018

The Alma Course API – An Exercise in Course Integration
David Lewis

The Alma Course Loader was inflexible – only runnable once a day, and it doesn’t let you recover from errors – so they wanted to write their own. Migrated to Alma when SOAP was available; later had to rewrite for the REST API. With the advent of Leganto the integration has become even more important.

Importance of API quotas and minimising frequency of calls. (Especially as the same API gateway is used by all Alma customers!) Course field mappings also important at the start. Another difficulty was course collapsing and parent-child course relationships (eg different cohorts within one course) which was important at their uni and was the hardest part to figure out. Ended up using course code for normal courses and parent course code for collapsed courses.

Discovered that even when they asked for JSON, error messages would come back as XML and crash their system – so ended up just writing their program to use XML instead of JSON.
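One defensive option (a sketch, not their actual code) is to request XML throughout and pull the message out of the error payload. The element names below follow the general shape of Alma web-service error responses, but should be checked against a live response:

```python
import xml.etree.ElementTree as ET

def alma_error_message(body: str) -> str:
    """Extract a readable message from an Alma-style XML error response."""
    root = ET.fromstring(body)
    for elem in root.iter():
        if elem.tag.endswith("errorMessage"):  # tolerate namespaced tags
            return (elem.text or "").strip()
    return "unknown error"

# Assumed shape of an Alma error response, for illustration only.
sample = """<web_service_result>
  <errorsExist>true</errorsExist>
  <errorList>
    <error>
      <errorCode>402263</errorCode>
      <errorMessage>Course not found.</errorMessage>
    </error>
  </errorList>
</web_service_result>"""
msg = alma_error_message(sample)
```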

Logging is a good debugging tool and audit trail and useful when raising jobs with Ex Libris.

Senior management often doesn’t value library contribution to course management – this is often political and requires a lot of awareness-raising among lecturers etc to get them to talk up the library to project managers.

Digital Strategy and Skills Development – A Balancing Act #anzreg2018

Digital Strategy and Skills Development – A Balancing Act
Masud Khokhar

“A short history of an ambitious team who curbed their enthusiasm for the larger good” / “of an ambitious team who told their evil overlord to shh and calm down”

Team works to enhance reach/impact/potential of digital and research – partnering with researchers which can lead to moments of optimism.

Key drivers – rapid tech changes, impact of machine learning, growth of digital scholarship, need for evidence-driven decision making, lack of general-purpose digital skills and ways of thinking among non-tech staff. Lancaster added ‘digitally innovative’ to its strategy; the university has a digital vision (digital research / digital teaching and learning / digital engagement).

So library needed to be digitally innovative, digitally fluent; diversity of thinking as core principle – formed innovation group to actively seek partnerships, build confidence, develop leadership, inspire creativity. Wanted to get insight into customer behaviour to develop data-driven services.

Most ideas actually turned out to be non-digital in nature – some required digital work, more required cultural change!

Ideas/projects

  • A Primo learning wizard for first-time users (but most people don’t log in so issues with them seeing it again and again).
  • Research data shared service – repository, preservation, reporting – collaboration with 15 institutions. Looking at a framework agreement/interoperability standard so variety of vendors can be on board – no matter what repository you use, it talks to a messaging layer which connects to aggregators, preservation services, reporting and analytics, institutional or external services.
  • Data Management Administration Online (sister to DMPonline) – about to be launched as a service – gives a birds eye view of all RDM/open science services at your institution. Can set KPIs, benchmark against similar institutions – has multiple views (DVC / librarian / data manager / IT manager etc). API driven including Tableau connector. Based on Jisc Research Data shared services and on messaging layer.
  • Mint – doi minting tool (open source to work with PURE)
  • Library digitisation service / copyright compliance for content in Moodle. Reports on downloads and usage
  • Leganto implementation (migrated from Talis). Developed some Moodle integration: https://moodle.org/plugins/mod_leganto
  • Noise reporting – part of indoor mapping system – users can select where they are and give comments on noisiness – system provides heatmaps and helps detect common patterns. Can extend this for fault reporting, safety reporting.
  • Labs environment for quick-and-dirty projects, eg library opening hours; research connections (extracting data from Pure, Scopus, SciVal, and Twitter APIs); preservation of research data (extracting from Pure into Archivematica – not in prod but possible); research data metadata (RDF based on Pure data); research outputs announcements (generated from Pure metadata for Twitter announcements – again not in prod but possible).

But when the team focused on machine learning and all the exciting stuff, it was at the expense of real needs. Still, for the snazzy stuff they did learn and adopt Amazon infrastructure and a local caching infrastructure for Alma data, some IoT infrastructure (beacon based; sensor based, eg noise and temperature; thermal imaging for people counting), and natural language touch points, eg Messenger/Slack bots.

Have decided that every process will be reviewed with digital as part of it. Introducing more Excel skills with training; Alma analytics training; analytical thinking in general. Trying to embed digital team in all library processes

Looking at the Rapid Improvement Exercises model

Wrangling Primo Resource Recommender #anzreg2018

Resource wrangling : An implementation of Primo Resource Recommender service at State Library Victoria
Marcus Ferguson, SLV

Principles: find best way to present recommendations; control number of resources recommended; clearly identify subscription vs free. Include: databases, websites, research guides, custom types (collection pages, exhibition pages), ‘more to explore’ (originally things like library hours, now repurposed libguides for subjects). In all, 488 resources with 10,500+ tags. Maintaining this either through Back Office or spreadsheet upload was going to be difficult.

Built a spreadsheet with columns: list of keywords; database1..database5; website1..website3 etc; with dropdown menus to populate these. But then need to convert this. VLOOKUP wouldn’t work so needed custom function. Found a VBA function via Google. This operates on a new sheet to create a list of databases and all the tags used by it, plus a list of ‘other tags’ added manually for each one. Final sheet pulls it all together into the format Primo expects.
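The VBA inversion step can be mirrored in a few lines of Python, which may be useful if the spreadsheet is ever replaced by a script. The row shape here is a simplification of the keyword/databaseN/websiteN columns described above:

```python
from collections import defaultdict

def invert_to_tags(rows):
    """Turn (keyword, [resources]) rows into {resource: set of tags}."""
    tags = defaultdict(set)
    for keyword, resources in rows:
        for resource in resources:
            if resource:  # dropdown cells may be left blank
                tags[resource].add(keyword)
    return tags

# Simplified stand-in for the spreadsheet's rows.
rows = [
    ("maps", ["Database A", "Website B", ""]),
    ("history", ["Database A"]),
]
tags = invert_to_tags(rows)
```

The final step – serialising the resource-to-tags mapping into the exact upload format Primo expects – is left out here, since that format is version-specific.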

Finally also assigned icons to improve visual effect – found from vendor branding pages; website; or social media. Looks bad if a resource has none, so assign a default logo in that case.

Subscription database use ‘Login to access’ as URL text; free ones have title as URL text.

Added rr_database_icon_check as a keyword so they can search Primo for all of these and check that the icons are still valid – mostly they’re pretty stable. If that changes, will grab them and store locally.

Final step is VBA macro to save export version and backup.

Looking forward: need to assess impact of the May release “tag enrichment”; extend spreadsheet to include research guides; apply additional error checking; investigate ways to allow other librarians to work with the tags while managing change control.

Automating systematic reviews #anzreg2018

Automating systematic reviews with library systems: Are Primo and Alma APIs a pain reliever?
Peta Hopkins, Bond University

Systematic (literature) reviews especially in medical field – one example retrieved 40,000+ abstracts, screened to 1,356 full-text, and included 207 in the final review.

Were asked for a process to reduce the time involved down to two weeks. Developing a toolset of elements to automate processes, esp finding/downloading full-text from subscriptions and batch-requesting from interloans.

  • Primo APIs to find/download? Not really (because actually it’s the Alma uResolver and even that can’t pull full-text).
  • Alma APIs to submit interloan requests? This has worked well – 95% success rate.

Old system: search Primo, click the interloan link, tick copyright boxes, submit.
Now: upload an EndNote file into the system, click a link to submit the requests to the Library, tick copyright boxes, submit (in bulk).
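Roughly, each parsed citation gets mapped to a resource-sharing request body and POSTed to Alma. This sketch only builds the body; the field names are illustrative and should be checked against the Alma resource-sharing request API before use:

```python
def citation_to_request(cit: dict) -> dict:
    """Map a parsed citation to a minimal Alma-style request body.

    Field names are illustrative assumptions, not confirmed API fields.
    """
    return {
        "format": {"value": "DIGITAL"},
        "citation_type": {"value": "CR"},  # assumed code for a journal article
        "title": cit.get("journal", ""),   # journal title
        "chapter_or_article_title": cit.get("title", ""),
        "author": cit.get("author", ""),
        "year": cit.get("year", ""),
        "agree_to_copyright_terms": True,  # the user ticked the box in bulk
    }

req = citation_to_request(
    {"title": "An article", "journal": "Some Journal", "author": "Smith, J", "year": "2018"}
)
```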

The dev wanted better documentation on the APIs (eg encoding format) and more helpful error messages; in future they want a way to find full-text and download it.

Repositories at https://github.com/CREBP

Leganto implementations #anzreg2018

eReserve, Alma-D and Leganto: Working together
Anna Clatworthy, RMIT

Project to move all 14,000 Equella e-reserve items to Alma Digital in a format to suit Alma/Leganto copyright and digitisation workflows

All course readings at RMIT are digital; the eReserve team in the library accepts requests, scans items, uploads them, and sends a link back for use in the CMS. Helps with copyright compliance. Mostly book extracts, some journal articles, Harvard Business Review.

Lots of questions to consider: MARC or DC; multiple representation or single record; how to deal with CAL survey in middle of migration; how do records look in Primo and in Leganto (which they didn’t yet have live); what is copyright workflow and how to manage compliance?

DC records weren’t publishing correctly so migrated to MARC. (This may have been fixed now). Multiple portion representations on a single bib record – migration process quicker, chapter/portion info in 505_2$a. Custom 940 field with copyright info

Extracted parents as a spreadsheet, extracted children as a spreadsheet, and a script combined the two — then instead imported records from Libraries Australia with a norm rule for extra fields (505 and 542 for extract and copyright info; 9XX for CAL information); trained non-library folk to use the MDE and run norm rules.

eReserve in Alma has no custom fields. Creates confusion for non-eReserve staff (thinking they own the book so no need to buy it, though in fact they only have 11 pages of ch. 4 – and it looks like a book in Primo too!)
* DC doesn’t work in Analytics – only see title
* Determined best practices and process for migration; set up Alma-D collections config and display in Primo; created MARC RDA cataloguing template and training; Leganto training and pilot; configure Alma reading lists, copyright, Leganto set up, and more…..
* Would like enhancements:
– automatic fills in copyright workflow – only working for some fields
– search function in reading list view
– MARC deposit form
– digital viewer link – the Share link doesn’t work, it leads to a ‘no permission’ page. (Users need to sign in first, but of course they don’t.)
* With Leganto, show-and-tells seem to be getting interest, as is word of mouth. Not actually live yet though due to IT delays.

Leganto at Macquarie University: impressions, adjustments and improvements
Kendall Kousek, Macquarie University

Macquarie had Equella for copyright collection. Teachers email list to library and list made searchable in Primo by unit code (via daily pipe). Move to Leganto to address some issues. Can search library for items or upload your own pdfs, images, etc.

Pilot with Faculty of Arts to create reading lists for 9 courses. Next semester another 11; one person had done it before and was confident enough to try their own. Next semester 3 departments; not many came to the session but a few still created their own reading lists; total of 120 reading lists created.

Feedback – added survey as a citation to reading lists – not many respondents as it was end of semester. A later survey was added to Moodle directly to capture those not using the reading list and find out why. Teachers liked how they could track how many used links and when (eg the hour before class), and the ability to tag readings (eg literature review, assignment, extra); students liked navigability and the ability to suggest readings to the teacher. Student satisfaction very high: clear layout, saved time chasing readings, and they can track their reading through the week. Library staff liked the layout, ease of learning/adding PCI records, and the Cite It! bookmarklet.

Improvements people wanted were better integration with Moodle (lots of clicks to get to an article); they found it slow to load; students were getting confused about whether discussions should be in Moodle or Leganto. Edge broke something so they told students to use another browser. Want a ‘collapse all’ button for previous weeks to get straight to today’s: Ex Libris is releasing this soon. Library staff want subsections functionality (Ex Libris is not going to do this, so they’re using the ‘notes’ feature instead).

Adjustments needed by
* students – easier to find readings in Primo – but not all are there (esp articles, chapter scans), Leganto is source of truth. So have created Resource Recommender record to link to Leganto.
* teachers – want them to create their own reading list instead of submitting it by email (or at least to include layout information in those emails). And get them to use more variety of resources.
* library staff – more collaboration, reading lists are never complete until end of semester so have to be on top of it.

Improvements
* one teacher is finding more engagement as students are aware their usage can be seen! Another is planning to be more ‘playful’ with reading lists. The appearance of Leganto makes students more aware of resources as resources, instead of just a list. The feeling is teachers will plan their teaching through Leganto. One teacher asks: “These are the questions for the week – what are the resources you’re using to answer them?”
* students can track which readings they’ve completed, can build own collection, can export in preferred referencing style.
* library staff have communication with teachers in Leganto; inclusion of all resource types (including web links using citation bookmarklet). Using public notes (eg trigger warnings)

4th stage of pilot will involve new departments, more volunteers by word of mouth. Need better communication/training eg presentations at dept meetings.

OER not currently dealt with – functionality maybe to come – can add CC license within a reading list but then depends on how widely you share that reading list!