Once upon a time, in a hospital not too far away, a machine learning algorithm was being used to diagnose a rare form of cancer. The algorithm was supposed to be super accurate, but one day it returned a false negative for a patient who actually had the disease. The patient didn't get treatment in time and died.
This isn't a fairy tale, unfortunately. It's a real-life example of what can go wrong when AI is integrated into healthcare without proper oversight and evaluation. A research team led by Dr. Jessica Mega recently published a paper in the Annals of Internal Medicine calling attention to the messy truth about AI in medicine and offering hospitals guidance on how to clean it up.
Concrete Examples
- An algorithm used by a hospital in Boston frequently recommended opioid prescriptions to patients who didn't need them, leading to overprescription and addiction.
- A machine learning model used to predict which patients were at risk of readmission after discharge was racially biased, producing disproportionately high false positive rates for Black patients (see the sketch after this list).
- A chatbot designed to diagnose mental health disorders based on patients' symptoms was found to be inaccurate and potentially harmful, as it often missed serious conditions or gave unhelpful advice.
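
The readmission example is easiest to see with a disaggregated audit. Below is a minimal sketch in Python of one way to surface that kind of disparity: compute the false positive rate separately for each racial group. The dataframe, column names, and toy values are assumptions made for illustration, not data from the study the bullet refers to.

```python
import pandas as pd

# Hypothetical audit data: one row per discharged patient, with the model's
# binary "high readmission risk" flag and the observed outcome.
# Column names and values are illustrative, not taken from any real model.
patients = pd.DataFrame({
    "race":       ["Black", "Black", "Black", "White", "White", "White"],
    "flagged":    [1, 1, 0, 0, 1, 0],
    "readmitted": [0, 0, 1, 0, 1, 0],
})

# False positive rate per group: among patients who were NOT readmitted,
# how often did the model still flag them as high risk?
not_readmitted = patients[patients["readmitted"] == 0]
fpr_by_group = not_readmitted.groupby("race")["flagged"].mean()
print(fpr_by_group)
# A persistent gap between groups is the kind of disparity described above,
# and it only becomes visible when the metric is broken out by group.
```

In practice the same comparison would be run on the hospital's own validation data, and on every clinically relevant metric, not just false positives.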
How Hospitals Can Fix It
- Provide more transparency and accountability for AI systems by creating clear documentation and reporting processes covering how each system was developed, tested, and validated (a minimal sketch of such a record follows this list).
- Prioritize diversity and equity in the development and deployment of AI tools by involving diverse stakeholders and by monitoring for, and addressing, bias and discrimination.
- Collaborate across institutions and disciplines to share best practices and data, and to establish guidelines and standards for ethical and responsible AI in healthcare.
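
To make the first recommendation more tangible, here is a minimal sketch of what a per-model documentation record might look like. The schema, field names, and example values are assumptions for illustration only, not an established reporting standard or regulatory template.

```python
from dataclasses import dataclass, field
from typing import Dict, List

# A minimal sketch of a structured documentation record ("model card") that a
# hospital might keep for each deployed clinical model. Every field and value
# here is an illustrative assumption.
@dataclass
class ModelCard:
    name: str                     # what the model predicts
    version: str                  # version currently in production
    intended_use: str             # clinical question and target population
    training_data: str            # description of the training cohort
    validation: List[str]         # where and how it was validated
    known_limitations: List[str]  # settings or populations where it degrades
    subgroup_metrics: Dict[str, float] = field(default_factory=dict)

card = ModelCard(
    name="30-day readmission risk",
    version="2.1.0",
    intended_use="Flag discharged adults for follow-up outreach",
    training_data="2018-2022 discharges from a single academic medical center",
    validation=["Held-out internal cohort", "One external community hospital"],
    known_limitations=["Not validated for pediatric or obstetric patients"],
    subgroup_metrics={"fpr_black": 0.18, "fpr_white": 0.09},  # placeholder values
)
print(card.name, card.version)
```

Keeping a record like this alongside each deployed model gives reviewers something concrete to audit when questions about development, testing, or validation come up.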
Conclusion
The potential benefits of AI in healthcare are enormous, but so are the risks if these technologies are not used wisely. Hospitals need to take a more proactive and collaborative approach to ensuring that AI is trustworthy, ethical, and effective in improving patient outcomes.