The CAUL Research Repositories Day programme
Open discussion on ORCID by Liz Krznarich
ORCID is an identifier, a registry, and a set of standard procedures for connecting researchers to affiliations and activities – to simplify reporting and analysis. What’s new?
- Collect & Connect program to enable two-way syncing between ORCID-enabled apps
- API v2.0 – simpler, faster, scalable (see the quick sketch after this list)
- institutional sign-in via eduGAIN – NZ Tuakiri eduGAIN membership in progress
- NZ ORCID Hub to launch this month – will enable inviting researchers to connect/create an ORCID iD; collect and store data; and add authoritative affiliations. Being made open source, with continued development of the platform. Test site available
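The v2.0 API mentioned above is the public ORCID API; a minimal sketch of reading a public record with it, in Python with just the standard library, might look like the following. The JSON paths are assumptions based on the published v2.0 record schema, and the iD is ORCID’s long-standing example record.

```python
# Minimal sketch, not an official ORCID client: fetch a public record via
# the v2.0 public API using only the standard library.
import json
import urllib.request

ORCID_ID = "0000-0002-1825-0097"  # ORCID's long-standing example record

req = urllib.request.Request(
    f"https://pub.orcid.org/v2.0/{ORCID_ID}/record",
    headers={"Accept": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    record = json.load(resp)

# The paths below follow the published v2.0 record schema; read defensively
# in case the shape differs for a given record.
name = (record.get("person") or {}).get("name") or {}
works = ((record.get("activities-summary") or {}).get("works") or {}).get("group", [])
print("Given name:", (name.get("given-names") or {}).get("value"))
print("Work groupings:", len(works))
```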
What’s next?
- Continued focus on automation, interoperability, and getting more (trustworthy) data in. 2016 focused on works; 2017 on peer review, the API, and affiliations; 2018 on funding
- online tools to manage membership and the API
- new training materials for researchers
- printable record view
- ID widget for personal websites
- 2-factor authentication option
Usage@Deakin by Bernadette Houghton
Tips for using Omeka
- create your own theme/plugin, but minimise use of other plugins – stick to those available on omeka.org
- 3rd-party tools – externally hosted (eg Timelines, Tag Clouds) vs locally hosted (eg pdf.js)
ISO 16363 for self-assessment of repositories by Bernadette Houghton
Deakin did a self-assessment using ISO 16363. Another tool would be fine – but review the criteria at the start and recognise the standard’s conceptual nature. You’ll want to prefer local knowledge over the ISO’s suggested documentation. And be ready to allocate resources to address identified areas for improvement. [This is perhaps the most challenging part!]
Comparing Apples with Apples: A repository output health check by Julia Hickie
All Australian unis send theses to Trove; most send research outputs; some include cultural or course materials. They ran out of funding last year; nine months later they got new funding, and coming back to a large backlog they have fresh eyes on the metadata issues.
Identifiers: these are proliferating – being implemented in repositories everywhere, but there are huge variations in what’s actually getting sent to Trove, eg ORCIDs are almost invisible – less than 1% of records include one. [At Lincoln we’ve avoided putting these in our OAI feed to avoid cluttering up harvesters’ author listings, as we haven’t found any best practice for which metadata field to include them in – may need to revisit this.] Grant IDs; DOIs (various forms of these have been recommended at different times and by different people – CrossRef currently recommends https://doi.org/10.1234/asdfb and Trove prefers a URL).
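On the DOI-forms point, here’s a tiny Python sketch of one way to normalise whatever variant a record holds into the currently recommended display form; the regex is a rough approximation of the common DOI pattern, not a complete grammar.

```python
# Minimal sketch: normalise whatever DOI string a record holds into the
# https://doi.org/ display form CrossRef currently recommends.
import re

# Rough approximation of the common DOI pattern
DOI_PATTERN = re.compile(r"(10\.\d{4,9}/\S+)", re.IGNORECASE)

def normalise_doi(raw):
    """Return the DOI as an https://doi.org/ URL, or None if none is found."""
    match = DOI_PATTERN.search(raw.strip())
    return f"https://doi.org/{match.group(1)}" if match else None

for value in ("10.1234/asdfb",
              "doi:10.1234/asdfb",
              "http://dx.doi.org/10.1234/asdfb"):
    print(normalise_doi(value))   # all three -> https://doi.org/10.1234/asdfb
```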
Standards: everyone sends some form of Dublin Core. Some use the NISO Access License and Indicators Recommended Practice (2015). Creative Commons licences help people find reusable material.
Standards checkup (see the sketch after these lists):
- output a link to the item in your repository in the dc.identifier field
- ORCID in dc.relation (as a full URL)
- grant identifiers in dc.relation (as full URLs)
- DOI in either dc.relation or dc.identifier (as a full URL)
Also check:
- how are you doing open access indicators? eg <free_to_read/>
- Creative Commons licences in full URL form in dc.rights or ali.license_ref
- rights statements in full URL form in dc.rights or ali.license_ref
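To make the checkup concrete, here’s a rough Python sketch of what a per-record health check along these lines could look like. The field spellings, the example repository domain, and the grant-PURL pattern are illustrative assumptions, not Trove’s actual validation rules.

```python
# Rough per-record health check along the lines of the checkup above.
# Field spellings, the example domain, and the grant-PURL pattern are
# assumptions for illustration only.
RECORD = {
    "dc.identifier": ["https://researcharchive.example.ac.nz/handle/123/456",
                      "https://doi.org/10.1234/asdfb"],
    "dc.relation":   ["https://orcid.org/0000-0002-1825-0097"],
    "dc.rights":     ["https://creativecommons.org/licenses/by/4.0/"],
}

def field(record, name):
    return record.get(name, [])

CHECKS = [
    ("repository link in dc.identifier",
     lambda r: any("example.ac.nz" in v for v in field(r, "dc.identifier"))),
    ("ORCID as a full URL in dc.relation",
     lambda r: any(v.startswith("https://orcid.org/") for v in field(r, "dc.relation"))),
    ("grant ID as a full URL in dc.relation",   # assumed ANDS-style grant PURL
     lambda r: any("purl.org/au-research/grants" in v for v in field(r, "dc.relation"))),
    ("DOI as a full URL in dc.identifier or dc.relation",
     lambda r: any(v.startswith("https://doi.org/")
                   for v in field(r, "dc.identifier") + field(r, "dc.relation"))),
    ("licence or rights statement as a full URL in dc.rights or ali.license_ref",
     lambda r: any(v.startswith("http")
                   for v in field(r, "dc.rights") + field(r, "ali.license_ref"))),
]

for label, check in CHECKS:
    print("OK     " if check(RECORD) else "MISSING", label)
```

In the sample record above only the grant-ID check fails, which is the kind of gap the health check is meant to surface.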
Considering NTROs in a new repository infrastructure, presented by Robin Burgess (lead investigator Marissa Cassin)
Non-traditional research outputs have been completely separate from the repository – they wanted to fix this. Types include original creative works (mostly visual arts, then musical, then textual, then others); live performances; recorded/rendered; exhibitions/events; research reports for an external body.
Researchers had concerns around copyright and time, but also saw value. They were keen on metrics (eg an Altmetric widget displayed in the Primo brief record), showing relationships between researchers, providing context for the output, etc
Need a clear interface, feeding into a reporting tool as well as the open access repository. Need a flexible metadata schema. Need the ability to create metadata-only entries and to upload multiple large files. Also interested in linking to external profiles and pulling in various metrics.
Supporting peer review of creative works by Kate Sergeant and Avonne Newton
Creative works are the ‘problem child’ of repositories. Instead of getting the output itself you might get a photo plus a form and a photocopy of a programme. This didn’t meet ERA standards for research statements, metadata, or evidence requirements. In 2015 responsibility moved to the library, which could then make changes. New process:
- tailored, tiered submission forms
- metadata and evidence entered by researchers and processed by library staff (in Alma)
- flows through to staff activity reports
- NTRO working group assesses
- library updates source data once review completed
This has enabled more timely and consistent reporting, providing a better foundation for ERA submission. Next steps:
- continuous process improvement
- Alma Digital migration – getting appropriate people access to dark archive content
- evolving requirements