Preparing the healthcare workforce #FigshareFestNZ

Preparing the healthcare workforce to deliver the digital future – the good, the bad and
the ugly.
Dr Ehsan Vaghefi

Lots of lessons learned through commercialisation involving AI.

The Good

  • Great for science – IBM’s Watson Oncology can provide evidence-based treatment options, generate notes and reports, etc; the oncologist then audits this. Enables the hospital to increase capacity as the AI is doing the heavy lifting. Would it replace radiologists? Some, yes; but other jobs have been created to work with the AI.
    • linking diseases to different genetic profiles
    • predicting possible treatments/vaccines for testing
    • AI-assisted cardiac imaging system
  • Gift of time – clinicians will have more time to focus on interacting with patients
  • Good reads: “The Patient Will See You Now”
  • Ophthalmology/optometry rely heavily on pattern recognition, eg AI is often more accurate at detecting cataracts; it can match human accuracy in detecting glaucoma (which you otherwise don’t know you have until too late) and in diabetic retinopathy screening

The Bad

  • Implementation – the customer request, the design, the documentation and the customer’s actual needs are often all very different!
    • Eg in one example where they provided more information to clinicians, it slowed them down and made them worse. Clinicians are scared of AI so they start second-guessing themselves. They do get faster using the AI with more practice – but never reach their unassisted screening rates! A similar study in Thailand – when gathering data, clinicians only passed on the good data they were confident about, so when the AI tried to deal with ambiguous situations it didn’t cope.

The Ugly

  • DeepMind Health got more than 1 million anonymised eye scans with related medical records – then sold itself to Google. (In 2017 the UK ruled that the NHS had broken the law in providing the medical records.)
  • Microsoft is partnering with the Prasad Eye Institute in India. IBM acquired Merge Healthcare, and IBM Watson is analysing glaucoma images for deep learning.
  • The Streams medical diagnosis app helps you self-manage your health – and provides the results to the hospital and your insurance company…
  • Zorgprisma Publiek, a “pre-clinical decision-maker”, helps “avoid unnecessary hospitalizations” – in practice the hospital can see in advance that you’ll be a costly patient and not admit you.
  • Re-identification – based on a single photo you can guess so much about a person that you can start to work out who they are.
  • AI bias – racism – baked in by incomplete datasets. Eg police using AI to assign a risk factor based on background and face, but because it’s trained on lots of racially biased data, it produces racially biased risk factors. Eg a healthcare algorithm where only 17.7% of patients receiving extra care were black (it should have been 46.5%). Vital to be very careful about data collection – who’s contributing and who isn’t – and to invest more in diversifying the AI field itself.
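The arithmetic behind that healthcare example can be sketched in a few lines of Python. The raw counts below are hypothetical, chosen only so the shares come out at the 17.7%/46.5% figures from the talk:

```python
# Hypothetical counts chosen to match the talk's 17.7% / 46.5% figures.
flagged_for_extra_care = {"black": 177, "other": 823}   # what the algorithm did
equally_sick_patients  = {"black": 465, "other": 535}   # what illness levels implied

def share(counts, group):
    """Fraction of the total belonging to one group."""
    return counts[group] / sum(counts.values())

observed = share(flagged_for_extra_care, "black")   # 0.177
expected = share(equally_sick_patients, "black")    # 0.465
# A model trained to imitate the flagged-for-care data simply reproduces
# the 17.7% rate - bias in the collected data becomes bias in prediction.
```

The point of the sketch: nothing in the training step corrects toward the 46.5% figure unless someone deliberately audits the data.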

Is ethical AI an oxymoron? Need to work out data ownership, governance, custodianship, security, impact on future.

Five pillars ethical AI

  • Transparency (informed consent etc)
  • Justice and fairness (make sure you’re not missing parts of the community)
  • Non-maleficence
  • Responsibility
  • Privacy

Is ethical AI a bargain/contract? A bargain struck between data sources and data users. Science needs data so it must be shared – but what benefit does the data source receive? Next evolution of big data in healthcare is “learning health systems” so instead of just holding your information the system can learn about you and give you better treatment.

Is privacy always beneficial? Sometimes sharing the data with an AI lets you get a better treatment plan.

A roadmap: “First do no harm”. Choose the right problem, don’t go fishing for data, and make sure when gathering data that the population understands everything about the research.

Digital humanities’ use of cultural data #theta2015

How will digital humanities in the future use cultural data?
Ingrid Mason @1n9r1d

[Presentation basically takes the approach of giving an overview of digital humanities and cultural data by throwing lots of examples at us – fascinating but not conducive to notes.]

Cultural data is generated through all research – seemingly more through the humanities, but many other fields too.
RDS is building a national collection pulling together statistical data, manuscripts, documents, artefacts and AV recordings from an array of unconnected repositories.

New challenge: people wanting access to collections in bulk, not just borrowing a couple of items. Need to look at developing a wholesale interface on top of our existing retail interface.

Close reading vs distant reading. Computation + arrangement + distance. Researchers are interested in immersion; in moving images (eg change over time); pattern analysis; opening up the archive (eg @TroveNewsBot). Text mining/linguistic computing methods to look at World Trade Center first-responder interviews. Digital paleography – recognising the writing of medieval scripts. Linked Jazz.

A dream: as an undergrad she would have loved to have been in the Matrix – have a novel surrounding you and then turn it immediately into a concordance.

Things digital humanities researchers need: Visualisation hours. Digitisation and OCR. Project managers. Multimedia from various institutions. High-performance computing experts.

~”Undigitised data is like dark matter” (Maltby)

What we can do:

  • Talk to researchers about materials they need
  • Learn about APIs
  • Provide training
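For the “learn about APIs” point, a minimal sketch of querying a collection API in Python – the endpoint, parameter names and `build_query_url`/`fetch_records` helpers here are all hypothetical illustrations, not any real service’s interface; real APIs such as Trove or DigitalNZ document their own endpoints, keys and rate limits:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def build_query_url(base, query, page=1):
    """Build a paged search URL (parameter names are hypothetical)."""
    return base + "?" + urlencode({"q": query, "page": page})

def fetch_records(query, page=1, base="https://api.example.org/records"):
    """Fetch one page of JSON records from a (hypothetical) collection API."""
    with urlopen(build_query_url(base, query, page)) as resp:
        return json.load(resp)

# build_query_url("https://api.example.org/records", "kauri", 2)
# → "https://api.example.org/records?q=kauri&page=2"
```

Paging like this is the “wholesale” access pattern researchers are asking for: the same search an OPAC runs one screen at a time, looped over programmatically.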

Q: Indigenous cultural data
A: Some material is very sensitive and there are challenges in getting it to appropriate researchers/communities, so there could be opportunities to work together.

Q: Any work on standardisation of cultural data?
A: At a high level (collection description) we can, but between fields it’s harder.