Preparing the healthcare workforce to deliver the digital future – the good, the bad and
Dr Ehsan Vaghefi
Lots of lessons learned through commercialisation involving AI.
- Great for science – IBM's Watson for Oncology can provide evidence-based treatment options, generate notes and reports, etc; the oncologist then audits this. Enables the hospital to increase capacity, as the AI does the heavy lifting. Would it replace radiologists? Some, yes; but other jobs have been created to work alongside the AI.
- linking diseases to different genetic profiles
- predicting possible treatments/vaccines for testing
- AI-assisted cardiac imaging system
- Gift of time – clinicians will have more time to focus on interacting with patients
- Good read: "The Patient Will See You Now"
- Ophthalmology/optometry rely heavily on pattern recognition – eg AI is often more accurate than clinicians at detecting cataracts; it can match their accuracy detecting glaucoma (which you otherwise don't know you have until it's too late) and detecting diabetic retinopathy in screening
- Implementation – the customer's request, the design, the documentation and the customer's actual needs are often all very different!
- Eg in one case, providing more information to clinicians slowed them down and made them worse: clinicians are wary of AI, so they start second-guessing themselves. With practice they get faster using the AI – but they never reach their unassisted screening rates! A similar study in Thailand found that, when gathering data, clinicians only passed on the good data they were confident about, so when the AI had to handle ambiguous cases it didn't cope.
- DeepMind Health got more than 1 million anonymised eye scans with related medical records – then sold itself to Google. (In 2017 the UK ruled that the NHS had broken the law in providing the medical records.) Microsoft is partnering with the Prasad Eye Institute in India. IBM acquired Merge Healthcare, and IBM Watson is analysing glaucoma images for deep learning. The Streams medical diagnosis app helps you self-manage your health – and provides the results to the hospital and your insurance company… Zorgprisma Publiek, a "pre-clinical decision-maker", helps "avoid unnecessary hospitalizations" – in practice the hospital can see in advance that you'll be a costly patient and not admit you.
- Re-identification – from a single photo you can infer so much about a person that you can start to work out who they are.
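The re-identification risk above is often illustrated with quasi-identifiers: supposedly anonymised attributes that, in combination, pick out a single person. A minimal sketch (hypothetical data, not from the talk) that counts how many records are unique on a (birth year, sex, postcode) combination:

```python
from collections import Counter

# Hypothetical "anonymised" records: no names, but each row keeps a
# combination of quasi-identifiers (birth year, sex, postcode).
records = [
    ("1978", "F", "90210"),
    ("1978", "F", "90210"),
    ("1985", "M", "10001"),
]

counts = Counter(records)
# Any record whose quasi-identifier combination occurs exactly once can,
# in principle, be linked back to an individual via outside data.
unique = [r for r, n in counts.items() if n == 1]
print(f"{len(unique)} of {len(records)} records are uniquely identifiable")
```

With richer attributes (eg a face photo, as in the talk's example) the share of unique combinations rises quickly, which is why "anonymised" health datasets still carry re-identification risk.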
- AI bias – racism – arising from incomplete datasets. Eg police using AI to assign a risk factor based on background and face: because it is trained on lots of racially biased data, it produces racially biased risk scores. Eg a healthcare algorithm where only 17.7% of patients flagged for extra care were black (it should have been 46.5%). Vital to be very careful about data collection – who's contributing and who's not – and to invest more in diversifying the AI field itself.
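The 17.7% vs 46.5% figures above can be turned into a simple disparity check. A minimal sketch (the function name and threshold are illustrative, not from the talk) comparing a group's share among patients flagged for extra care against its expected share:

```python
def representation_gap(flagged_share: float, expected_share: float) -> float:
    """Ratio of observed to expected representation.

    1.0 means parity; values well below 1.0 indicate the group is
    under-represented among patients selected by the algorithm.
    """
    return flagged_share / expected_share

# Figures quoted in the talk for the healthcare algorithm example:
ratio = representation_gap(0.177, 0.465)
print(f"Black patients were flagged at {ratio:.0%} of the expected rate")
```

Here the ratio is about 0.38, ie the algorithm selected black patients for extra care at well under half the expected rate – the kind of audit that careful data collection is meant to make possible.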
Is ethical AI an oxymoron? We need to work out data ownership, governance, custodianship, security, and the impact on the future.
Five pillars of ethical AI
- Transparency (informed consent etc)
- Justice and fairness (make sure you're not missing parts of the community)
Is ethical AI a bargain/contract? A bargain struck between data sources and data users. Science needs data, so it must be shared – but what benefit does the data source receive? The next evolution of big data in healthcare is "learning health systems": instead of just holding your information, the system learns about you and gives you better treatment.
Is privacy always beneficial? Sometimes sharing your data with an AI gets you a better treatment plan.
A roadmap: "First do no harm". Choose the right problem, don't go fishing for data, and make sure that when gathering data the population understands everything about the research.