Tag Archives: theta2015

Summary of 26 #theta2015 sessions

So yes, it turns out that I attended 26 sessions at #theta2015. This link is to the tag here on my blog, so in addition to all my live-blogged notes it self-referentially includes this post and any future thoughts arising (I have at least one post planned on altmetrics and oral presentations). For those daunted by the thought of that much reading (including my future self, for when asked what I got out of it), here’s a more scannable summary.

Highlighted are those titles that I particularly want to refer back to for one reason or another, which may bear only passing resemblance to those titles that will be of interest to others.

Day 1:

  1. Waves of the Future: Possibilities for Higher Education: throws out a bunch of exciting/terrifying trends affecting higher education and posits some provocative scenarios for the future (open wins; closed wins; automation wins; creative renaissance). Much to think about.
  2. Changing times, emerging generations: a snapshot of the megatrends affecting higher education: more trends, (Australian) demographics-focused. My notes were brief, just reflecting my own discomfort with this kind of lumping which can neglect vulnerable groups. To which I’d now add that I could see the value of saying “Most people are comfortable with this technology and it’s the new way of the world” if you immediately follow it up with “So how do we support people who aren’t?”
  3. Integrating user support for eResearch services within institutions. Lessons learned from AeRO Stage 2 User Support Project: successfully introduced a maturity model for services to provide user support, after realising a completely centralised approach wasn’t workable. I’ve come across the maturity model idea before so great to hear more about it and its advantages here; it seems like something that could be useful in all sorts of contexts both in getting ourselves/other institutions up to scratch, and in supporting researchers and other staff (and students too, why not?) to upskill in all sorts of areas of expertise.
  4. 264 students, eight courses, 792 High Definition video streams, no walls: primarily a ‘look at our awesome technology/learning space’ presentation (re a wet lab that can accommodate 8 simultaneous classes – it is in fact awesome) but also good takeaways about the power of stakeholder engagement and prototyping in a successful project.
  5. Forging productive partnerships between learning, teaching, library, and IT: panel discussion about value of collaboration between these groups. Executive summary: it’s super valuable, let’s all do more of that (but also some challenges).
  6. Where does Campus Learning become Online Learning? Emerging trends in learning space design and usage: panel discussion on developing good learning space from various perspectives (academic and IT definitely in the mix). I noted a linkage to the value of collaboration panel above; also now note the link to maturity models implied by idea that putting slides online, while not actual online teaching, can be a starting point.
  7. A Real-time Step into Space: Reducing complaints about study space by providing monitored “satellite” spaces (with “shushing”) and creating an app linked to gatecount cameras to tell students where they can find free spaces. This spawned a brief Twitter discussion in which @GraemeO28 asked if there was an app to shush students and I suggested (accidentally under the wrong Twitter account) a shushing librarian avatar on a wall screen activated by decibel levels.
  8. Video-conferencing and teaching – From outback Queensland to Ireland and back again: looked at student engagement with lectures using video-conferencing a) class from one campus to another and b) video-conferencing to enable lectures by industry experts. Some good discussion about challenges and benefits (especially with the industry engagement).
  9. Connecting data to actions for improved learning: Scan of much-increased sources of data that can be used for learning analytics to predict and head-off failure/drop-outs. Idea of letting students track own data along with health, cf Fitbit-style wearables. (I’d point out that health-monitoring wearables have fallen prey to unconscious bias: male designers fail to include monitoring of periods; white designers accidentally make the pulse detection fail with dark skin. So we’d need to be careful of things like this.) In questions the ‘creepy’ factor was also discussed.
  10. Innovations in publishing; giving control back to authors: I didn’t write down much detail of this good overview of the trend to open in publishing. Being familiar with that, for me the interesting part is the question raised by the conclusion about how we need to shift the power from the publishers (who still have it, even under open access) to the authors. The question being: how do we do this? given that it requires authors to have knowledge, do they even want it? Sometimes with great power comes great mental fatigue…

Day 2

  1. Learning Sciences & the Impact on Learning Technologies and Learning Activities: This turned out to be the session I’d come for: a great introduction to how we know a whole lot about learning and we should be designing learning tools around good learning practices. People aren’t good at estimating their own competence – but increasingly there are adaptive learning solutions out there that can.
  2. How will digital humanities in the future use cultural data?: primarily an overview of how digital humanities scholars use data now. Suggests talking to researchers about what materials they need, investigate APIs, and provide training.
  3. B(uild)YO skilled Data Librarian: flipped classroom, so I was too busy participating to take notes.
  4. ‘Let’s be brief(ed)’: Library design, education pedagogy and service delivery: participatory design and built pedagogy in redesigning library space for an architecture library. The library as reference material for architecture students, as well as including varied learning/study spaces.
  5. Evolving customer engagement: Using mobile technology and gamification to improve awareness of and access to library services: used Blogger and Google Forms to make their regular library orientation tour more self-directed and fun. So evolutionary rather than revolutionary. Appreciated that they mentioned the (significant) time it took to do the work at various stages, also the demo with a custom-designed ‘game’ for the session.
  6. Towards a New Library of Resources for Higher Education Learning and Teaching: presentation focused on the choice of a vocabulary (to improve search effectiveness) and work involved in mapping terms.
  7. Curtin Library Rocking the (meta)data: a nice point about the line between data and metadata not being clear. Mostly about a specific digitisation project; interesting take away that this was seen as the best way for librarians to develop data skills ‘on the job’, and that they would need to learn new skills for each new project. So then does that mean we shouldn’t worry about generic upskilling, but just jump in? It certainly implies that we shouldn’t assume the learning curve (and the time/training/money needed for that) will be smaller on second or subsequent projects.
  8. Elements Integration – lets chat about Research Repository and populating Researcher Profiles: unfortunately garnered far more prospective Elements users than current ones, which unbalanced the desired discussion and probably didn’t turn out to be very helpful for anyone.
  9. KISS Goodbye to roadblocks in scholarly infrastructure: a bit about open access, but particularly interesting discussion on the need for persistent identifiers especially in the context of the proliferation of standards. ORCID’s tried to avoid pitfalls but early days…
  10. Reimaging the University Helpdesk for the Next Generation of Digital Research Skills: introduced various support services including software/data carpentry workshops, research tool ‘speed dating’, hacky hour at the bar, Research Bazaar. Idea that everyone works in different ways so need different methods. This is resource intensive so I especially liked the idea of essentially matchmaking researchers who know a tool with those who want to learn it to develop a sustainable research community. In later discussion @kairos001 pointed out this is also hard to sustain in a small environment where there isn’t a critical mass of researchers knowing any basic tools. So maybe we need to collaborate with other local unis/CRIs, or even facilitate bringing in external experts.

Day 3:

  1. From Information to Meta Knowledge: Embracing the Digitally and Computable Open Knowledge Future: state of the nation of research libraries in China which are rapidly changing to support research. Culminated in a shocking mention that these libraries are currently hiring more STEM grads than library grads – seen as easier to teach STEM grads library skills than to teach library grads the needed STEM skills. This was clarified as a temporary situation – ideally want to get library schools to restructure somehow to support needed skills. Still felt to me like the focus on STEM might be at the expense of other important aspects of librarianship and even of research viewed more broadly.
  2. Creating Connections in Complexity: discussions at the intersection of big data & learning: flipped presentation but I got a few notes down on the question of where is the ‘human’ in analytics. Provided me with thinky thoughts about not losing the individual in the pattern, and about not devaluing creativity in favour of empirical/quantifiable analyses.
  3. Design Develop Implement – A team-based approach to learning design: helped people wanting to design new learning objects / programs through a short program of workshops and consultations. I didn’t get a lot out of this session but there may be more of use from their website.
  4. Copyright and compliance when the law can’t keep up: Issues with innovation in online classrooms: a good discussion of navigating a middle way between hyper-compliance and total disregard of copyright law, by focusing on managing risk. Gave a useful checklist of things to think about when making decisions, with examples.
  5. Flexible, Secure and Sharable Storage for Researchers: overview of the storage solution they developed for research data and some of the features they built in. Developed for working data – not intended to provide storage for published data – but some thought put into archival (primarily taking the “long-term storage space is cheap, let’s keep everything” approach and figuring they’ll deal with the long-term costs of this if it gets popular enough).
  6. Better connected education – The future classroom & campus: high-level overview of trends in ICT as affecting higher ed. Not in a style I was able to easily note-take so hopefully there’ll be slides online.

And finally:

Better connected education: future classroom & campus #theta2015

Better connected education – The future classroom & campus
Sue Bryant (Huawei)

Education, its role, and its delivery are changing, especially with respect to ICT (as is everything else in the world). It’s starting to become more like a business. [This is my sadface: 🙁 ] Rise in number of foreign students, and increase in offshore branches [especially for Australia]. “Technology is the equaliser” [for those that have the technology].

How we learn is changing: passive learning vs proactive learning.
Learning pyramid from lower to higher retention rates: lecture, reading, av, demo, discussion, practice, teaching others.

Virtual interactive campus – need to think about pre-class preparation; in-class teaching; after-class coaching; extracurricular learning.
Collaborative learning platforms to support interactive classrooms: primary classroom but also remote classroom; learning at home; mobile technology [learning on the bus, in the waiting room, etc]
Envisaging everything cloud-based so teacher can create lesson or preview and send to class before / during. Different teaching aids. Homework / discussion forums post-class. Students can go to portal to see schedule, who’s in class, tools to manage education.

[Slides reference “ICT in education in New Zealand, agenda for the future” but I only find ICT in schools.] China has an “ICT in Education” 10-year plan.

[Vast amount of data on slides here; everyone’s frantically taking photos; I’m assuming the slides will go up somewhere sometime.]

Internet of Things may not be huge at universities, but many looking at smart cities. Hi-def video isn’t currently delivered over a network but in future could be over 5G. Students expect to BYOD and use these so need to accommodate them as we move forward (and make sure there are security mechanisms in place!) eBooks and ereaders/”ebook tablets”. Community clouds – virtual private clouds, an ‘ecosystem’ of people. Image / data management – enabling the digital library. SDN (software defined networking) – creating a network that’s application-aware – eg when there’s requirements around latency.

Back to 5G: currently we’re limited to thousands of connections per cellsite; on 5G we’re talking millions. Latency is 50 times lower. Transfer speed is 60 times quicker.

“[technology] is the pen and paper of our time” – David Warlick

Flexible, Secure and Sharable Storage for Researchers #theta2015

Flexible, Secure and Sharable Storage for Researchers (abstract)
Andrew Nielson and Stephen McGregor

Talked a lot to researchers. A quarter of researchers didn’t know how much storage they needed. Few needed more than 10TB. Built http://research-storage.griffith.edu.au/

Found existing services were uni-focused – hard to give access to external collaborators. Need to be competitive with cloud services. Want to let people collaborate with everyone, but not everyone. So there’s a form that lets researchers invite other users to sign in using a uni, Google, or LinkedIn account.

Needed multiple ways to share. Internal sharing – share with people by name. External sharing – provide a web URL with password protection / expiration date.
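The external-sharing model described above (a URL plus optional password and expiry date) can be sketched in a few lines. This is purely illustrative, assuming nothing about Griffith’s actual implementation; the base URL, token scheme, and default expiry are all invented:

```python
import secrets
import time

# In-memory registry of shares; a real service would persist this.
shares = {}

def create_share(path, password=None, ttl_seconds=7 * 24 * 3600):
    """Create an external share link for a file: random unguessable token,
    optional password, and an expiry timestamp (default one week)."""
    token = secrets.token_urlsafe(16)
    shares[token] = {
        "path": path,
        "password": password,
        "expires_at": time.time() + ttl_seconds,
    }
    return f"https://storage.example.edu/share/{token}"  # hypothetical base URL

def resolve_share(token, password=None):
    """Return the shared path, or None if the link is unknown, expired,
    or the password doesn't match."""
    share = shares.get(token)
    if share is None or time.time() > share["expires_at"]:
        return None
    if share["password"] is not None and password != share["password"]:
        return None
    return share["path"]
```

The random token is what makes the URL safe to hand to an external collaborator: the link itself is the credential, with password and expiry as extra layers.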

Device support: web interface plus apps including desktop sync apps.

Project spaces – you get 5GB storage by default but set up a project and storage space is unlimited. Space is a folder / “logical grouping of data”. When creating, have to include metadata for admin purposes (owner, project name, funder, backup contact). Instant approval and provision – don’t want to get in the way. Unless told to delete old / unaccessed data, just move to cheaper storage – effectively archiving off.

Block level deduplication (basically store a reference to previously stored data) better than single-instancing and lower overhead than compression. Have managed to save 46% space this way. This is needed because software stores entire new version, instead of a diff. “Don’t keep backups” but do replicate/sync between their geographically separated datacenters.

Used by Sciences but also Arts/Ed/Law, Business, and Health.
30% of projects (18 researchers) unfunded – data that would otherwise be on hard drives and uni wouldn’t even know it exists.

Developing and piloting more services including storage for use by instruments.
Currently administrators need to be hands-on to setup service – want to automate.

Q: Mandate?
A: If you force it people get annoyed. Providing option.

Q: Funding going forward given that new data probably bigger?
A: Yeah… basically want to build it well, get data off hard drives, show popularity, and then write business case if/when new space needed. Nowhere near this need yet.

Audience comment that usability for researchers is fantastic.
A: Getting feedback from researchers has helped this.

Q: Any data publication service in development?
A: Project focused on working storage. eResearch Services department are working on a system for post-publication storage.

Q: Is it accessible to computational services?
A: Another project in early stages working on computational needs. Data in this format isn’t ideal for putting on servers – technically possible but usually when people are doing stuff on a server they want their storage there too.

Copyright vs innovation in online classrooms #theta2015

Copyright and compliance when the law can’t keep up: Issues with innovation in online classrooms (abstract)
Alison Makins

Some parts of copyright law are too narrow – eg “broadcast” in Aus defined as radio and TV and doesn’t cover iTunesU, Tumblr, Vine, etc etc etc. Some parts are too broad. Change in the law is slow! So copyright can be a huge barrier to innovation. However this shouldn’t hold us back.

Universities tend to end up on ends of spectrum:
Hyper-compliance <--------------------> Total disregard

Alison advocates:

  1. taking copyright out of the picture so people don’t have to think about it. Use open access material and just read the licenses, which were designed for users and are easier to understand. Sells OA to instructors by stressing flexibility. Copyright exceptions can be used if content is locked up in the LMS, but they’re no good if you want to be portable. Or often easier to create original content and suggest additional readings.
  2. managing the risk – Some questions are clear-cut; some aren’t. Hyper-compliance says don’t do it (depriving students); total disregard says go for it (possible legal risk) – so middle road of managing the risk.

Think about:

  • What’s the likelihood of consequences? Think: identity of rightsholder; nature of use; scope of use; profitability; mitigating steps – only require reasonable analysis. eg a photo taken by a restaurateur, used in full, cited and linked to the restaurant website, in a MOOC: clearly not trying to profit, since the photo was taken casually and the use isn’t commercial.

    Secure it (lock it down)
    Clip it (crop it)
    Attribute it
    Put endusers on Notice (so students know what they should do about it)
    And also provide a way for people to contact you if they want it taken down so they don’t have to resort to suing.

  • What’s the severity of consequences? Consider: nature of work (how much effort put in?); value of work (proprietary information? market?); damage your use will do to value; scope of use; nature of likely consequences (eg takedown notice – but unlikely if already over internet)

Gives power back to users and takes it away from lawyers. 😀

Encourages everyone to do their own risk assessment – not sustainable to have a single copyright officer deciding everything. Try to walk them through the process.

Most creators (except for scholarly publishers) are comfortable having content used in educational settings.

A team-based approach to learning design #theta2015

Design Develop Implement – A team-based approach to learning design (abstract)
Deidre Seeto and Panos Vlachopoulos

DDI work with teams on learning design – collaborative approach to rapid program design development. Sessions followed by consultations, including preparation for after these sessions are concluded.

Sometimes through story-boarding they come up with ideas that require grants or faculty partnership.

Case-study: Academic interested in flipping classroom. Came up with idea; wanted to explore feasibility, cost. DDI workshop helped a lot. Well-structured.

Case-study: Wanted to explore infolit design. Got to meet with staff and find out what they wanted to deliver to students. Didn’t realise until went through programme how much goes into designing usable modules. Got much that could pass to colleagues too.

Value of using external facilitators in it. Ongoing relationships important. Had to have a readiness interview – some not really ready. All about dialogue and tools fit for purpose. Practice underpinned by theory. People appreciated the space and time to really think and focus. And were very clear on outcomes; check on them later about their action plans.

See also: https://ddiprogram.wordpress.com

Intersection of big data and learning #theta2015 #bright-dark

Creating Connections in Complexity: discussions at the intersection of big data & learning (abstract and bibliography)
Theresa Anderson @uts_mdsi and Simon Buckingham Shum

“Data is explosive, evolving and infinite” – connecting the dots is important but happens at the expense of things that aren’t connected. Ubiquitous technologies often grab the spotlight, but the ‘invisible hand’ of big data and analytics is important. “datapoints in a graph are tiny portholes onto a rich human world” (Buckingham Shum 2015)

Risk of assumptions and values getting baked into data. Tools don’t just provide access to reality but can shape reality. [Yet] “Raw data is both an oxymoron and a bad idea” (Bowker 2015)

[Flipped classroom presentation – here we start playing with picture cards and post-its to brainstorm and discuss:]

  • where is the ‘human’ in analytics?
  • what human/machine partnerships can/should we enable in computationally intensive work?
  • can the analytics of curation help us support creativity and learning?

[My brainstormed image]

From Information to Meta Knowledge in China #theta2015

From Information to Meta Knowledge: Embracing the Digitally and Computable Open Knowledge Future (abstract)
Dr. Xiaolin Zhang, Director, National Science Library, Chinese Academy of Science

In China average distance of a user to a library is 1000km. Main body of students is graduate students. No broad variety of courses – taught what advisors know.
Chinese Academy of Sciences now taking lead in research and innovation, education etc – dividing institutes into four categories: centres for Excellence; for Innovation; for Big Science Facility; for Special [regional] Needs.

National Science Library coordinates institutional libraries. From beginning of digital library development taking an “e-first” approach to push resources to where researchers are. Federated searching, integrated browsing, ChinaCat, ILL, real-time digital reference. Most print subscriptions cancelled. Can’t subscribe to everything for everyone so organising consortia.

Subject librarians embedded in research institutes. Information analysts. Embedded info systems.

Challenge now:
Print-based communication is a mistake borne out of historical practicality. Knowledge is inherently multi-media. Only e-journals are real journals; only smart books are real books. Transition from subscription journals to open access journals.

Research more inter-disciplinary, collaborative, open. Means most researchers are ignorant of most of the stuff they’re working on! Great need for research informatics: have to quickly analyse unfamiliar field. Tech trends: the machine is the new reader.

What’s the place of the library? Embed in R&D processes: environmental scanning, idea and design testing, data management and analysis, etc. Analyse needs of researchers – not just those in lab (need help with search and retrieval) but also primary investigators (help with discovering, exploring, designing) and deans and directors (help with trend-detecting, road-mapping). Variation between kinds of institutes too. Have to work out who needs what.

So repurposing the library: informational productivity; R&D win by analytics; support open innovation. Huge focus on open access. User-driven digital information systems – knowledge mapping services and research profiling services based on institutional repositories.

Building teams with domain knowledge – resources for data mining, networks of experts, embedded mechanisms. Hiring scientists more than library school graduates. (Library school recruits from undergrads so these students have no STEM background. Trainable over 5 years but need them to work in the field now. Suggesting library school change structure to get needed experience in there.) Developing from a collection library to a creation library to an R&D knowledge service provider.

University Helpdesk for Digital Research Skills #theta2015

Reimaging the University Helpdesk for the Next Generation of Digital Research Skills (abstract)
Dr. Steven Manos, David F. Flanders and Dr. Fiona Tweedie

Can’t hope to offer one-to-one support to all the researchers they need to support (especially in the context of the “digital native researcher”) so want to reimagine how they offer support.

Asked researchers what tools they use:
eg python, git, chrome, WebGL, OpenGL, Data-Driven Documents
eg ArcGIS, Google Maps, SPSS
eg Terminal, Matlab, Dropbox, Evernote, iPhone camera
eg Anaconda, R, PsychoPy, iPython, Markdown
Often have an enormous array of tools in their toolbox but still want to add more, so how can we hope to help them?

“Community: it’s what makes digital research possible”. Instead of supporting researchers with tools, encourage/facilitate users of these tools to support each other. [Ooh so much potential here.] Build community. Researchers already often learn from each other. All training done by researchers. Research networks tend to be self-sustaining and ongoing.

“A helpdesk is reactive. A training community is proactive.”

Sometimes run into “I have books, leave me alone” and “I don’t computer”. But many excited by being able to flash up a paper by adding a customised map. Workshop on this, very popular, researchers coming back, had 3-4 papers come out.

Software carpentry – teaching coding to non-coders. Teaching them enough coding to be able to make use of Python, R, Matlab in their work (eg a for loop) to make their lives easier without trying to turn them into computer scientists. Taught by researchers for researchers. Intensive, hands-on, many helpers. Every 15min stop talking and they do a challenge to put into practice. Code breaks – important for people to see how this works: you google the error message, the answer is on StackOverflow and you patch it up and continue.
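The “eg a for loop” above is the scale of the win Software Carpentry aims for. A hypothetical illustration (the file names and column name are invented): a researcher who has been opening result files one by one learns to loop over the whole folder instead.

```python
import csv
import tempfile
from pathlib import Path

def mean_of_column(filename, column):
    """Average one numeric column of a CSV file."""
    with open(filename, newline="") as f:
        values = [float(row[column]) for row in csv.DictReader(f)]
    return sum(values) / len(values)

# Toy data files standing in for a researcher's folder of results.
data_dir = Path(tempfile.mkdtemp())
(data_dir / "run1.csv").write_text("temperature\n20\n22\n")
(data_dir / "run2.csv").write_text("temperature\n18\n20\n")

# The 'for loop' moment: instead of opening each file by hand,
# process every CSV in the folder in one go.
for path in sorted(data_dir.glob("*.csv")):
    print(path.name, mean_of_column(path, "temperature"))
```

Nothing here would impress a computer scientist, and that’s the point: it automates a tedious manual task without requiring the researcher to become one.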

Data carpentry assumes no coding experience. Teaching text mining/analysis for humanities.

How do we get people involved in 3D printing? Throw a grant at them. [Ah to be in an organisation where a few thousand dollars is spare change. 🙂 ]

Research Tool Speed Dating: set up tools on workstations around the room and rotate researchers around the room – if they like it they can set up a second ‘date’ ie training.

HackyHour: come to a bar and people can come, have a drink, ask questions.

Research Bazaar: pulled 19 courses together over a 3-day event.

Different people engage in different ways so having all these methods is really important.

Why would a university want to invest/engage in something like this? [Why wouldn’t it?!] Often IT shops are enterprise-focused, not researcher-focused. Take a user-driven approach.

Asked researchers to cite them if skills help produce articles, and 2 articles have been published citing ResBaz (Research Bazaar). Much social media engagement.

ResBaz going international – Mozilla Science taking over the community. 1st week of Feb next year if you want to do it at your university.


  • open and collaborative platforms
  • some fanatical community engagement
  • cost-effective

Introducing the ResBaz Cookbook (in development)

KISS Goodbye to roadblocks in scholarly infrastructure #theta2015

KISS Goodbye to roadblocks in scholarly infrastructure (abstract)
Martin Fenner, Technical Lead, Public Library of Science (PLOS) @mfenner

“Advanced search” screen vs simple Google-style search vs Wikipedia article about Crick and Watson article which also discusses Franklin controversy. Article itself is on Nature (doi:10.1038/171737a0) and requires a login, payment, or rental. Nature eventually made it [this vital historic article!] freely available for 50th anniversary if you happen to know the right link…

Another model: can get it for free but have to sign up first and insists on knowing your affiliation, job title, etc etc. Cf logins that require only email address, nickname, password. [We really need a secure, universal, federated authentication system. I’m not sure whether or not this is an oxymoron, but we still need it.]

For reuse: often have to say what for, what format, who you’re distributing to, etc and then pay ridiculous amounts of money to the publisher to just show a figure at a conference.

http://xkcd.com/927 [Earlier discussed history of why we have so many plug/socket standards – because window of opportunity to develop standards was around the 1930s and countries weren’t really talking to each other…]

Persistent identifiers. Could argue you don’t need bibliographic info, just a persistent id eg DOI, PMID, Bibcode ID. First problem is that there’s more than one. Second problem is that there’s also URLs associated with these. And then, the CrossRef DOI display guidelines say to always display DOIs as permanent URLs in online environments [cf the problem earlier this year when their DOI resolver went down whereas other resolvers kept working, and they said that we shouldn’t rely on a single server/permanent URL]. [Plus and also, many DOIs aren’t as permanent as they were meant to be.]
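Part of the mess is that the same DOI circulates in several written forms. A hypothetical helper (the function name is mine, and the regex is a rough sketch rather than a full DOI grammar) showing the kind of normalisation the display guidelines imply, using the Watson and Crick DOI from above:

```python
import re

def normalise_doi(text):
    """Pull a DOI out of any of its common written forms (bare DOI,
    doi: prefix, dx.doi.org or doi.org URL) and return it as the
    single https://doi.org/ URL form."""
    match = re.search(r'(10\.\d{4,9}/\S+)', text)
    if not match:
        raise ValueError(f"No DOI found in {text!r}")
    return "https://doi.org/" + match.group(1)
```

Even this tiny example shows why interoperability hurts: every system that consumes identifiers ends up writing (a more careful version of) this function for itself.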

Different places refer to article with different identifiers – interoperability issues. [Does anyone map DOIs to PMIDs to Bibcodes to…?]

Rise of the stacks: Elsevier; ResearchGate; Digital Science; Academia.edu all trying to merge publishing and social sites for publishers [some coming from one angle some from another]

Cameron Neylon’s principles for open scholarly infrastructures: cover governance (stakeholder governed, transparent), sustainability (‘time-limited funds used only for time-limited activities’ [this is such a good principle!], revenue based on services not data), insurance (open data, open source). ORCID has tried to follow these principles.

Q: Given multiplicity of standards, how do we know ORCID is different?
A: ORCID is too young to say if it’s a success. Much thought went into it but of course always start out with best intentions.

Chat about Research Repository #theta2015

Elements Integration – lets chat about Research Repository and populating Researcher Profiles (abstract)
Leonie Hayes and Anne Harvey

[Facilitated audience discussion of various questions only loosely related. Probably unintended that largely drew an audience of people perhaps more interested in learning about Elements than of people who had already implemented it.]

Discussion of data – Creative Commons licenses not very appropriate to datasets because immediately locks down opportunities for reuse. Creative Commons Zero is better here.

“Sunshine cleaning” – when you hang your data out to dry and everyone sees how dirty it is so you quickly clean it. [Very effective but terrifying for many researchers so I suggest an alternative might be to put the data, like the journal article, out for peer review.]

Looking at impact for Creative Works – altmetrics. Many don’t see themselves as researchers but as practitioners. Uptake of workshops is low as often working from home. The institution needs to focus on areas outside STEM and traditional metrics – these alienate scholars in other fields.

Open Access policy. Many have ideals but doesn’t translate into practice. Especially license issues. Difficulties when managing a PBRF version vs an open access version.