It was a typical Friday evening for Kelly, a breast cancer survivor. She had just finished her grocery shopping when her phone beeped: a notification from her AI health assistant, a ChatGPT-style chatbot designed for cancer patients and survivors. Kelly had been using the assistant for her daily dose of tips and advice on managing post-cancer symptoms.
However, this notification was different. It read: "Kelly, your recent breast MRI shows some suspicious changes. Please consult your oncologist immediately." Kelly's heart sank. She had undergone her routine follow-up MRI just a few days earlier and was still waiting for her oncologist to call back with the results. She was not prepared for this.
Kelly's mind wandered back to the times she had turned to the chatbot for advice. Was it reliable enough to warn her about something as significant as a cancer recurrence? After all, it was just a machine. Could it be trusted?
AI has come a long way in recent years. With more sophisticated algorithms and machine-learning capabilities, it is increasingly used in healthcare for applications such as cancer diagnosis, treatment planning, and survivorship care. But as AI's role in healthcare grows, so do questions about its trustworthiness, particularly where patients rely on AI-powered tools to self-manage their health.
The issue of trust in AI is not confined to healthcare. The World Economic Forum's "Global Risks Report 2021" identified trust in technology as one of the top global risks, noting that "the pandemic has accelerated the adoption of digital technologies, presenting new risks alongside technological benefits."
In healthcare, there is evidence to suggest that patients are becoming more accepting of AI in their care. For instance, a 2019 survey of cancer patients in the US found that 85% of respondents were willing to use AI for their care, with 82% stating that they would trust AI advice as much as advice from a healthcare professional.
However, there are concerns about whether AI can truly be trusted. In a study published in the Journal of the American Medical Association, researchers tested four popular symptom-checking apps against 45 clinical vignettes. The apps listed the correct diagnosis first in only 34% of cases, and included it anywhere in their top 20 suggestions only 51% of the time.
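To make those two numbers concrete, here is a minimal sketch in Python of how such an evaluation is typically scored. The app outputs, diagnoses, and the top_k_accuracy helper are hypothetical; only the top-k scoring logic reflects the kind of measurement the study describes.

```python
# Minimal sketch: scoring a symptom checker against clinical vignettes.
# The vignettes, diagnoses, and app outputs below are invented examples;
# a real evaluation would use clinician-validated cases.

def top_k_accuracy(ranked_outputs, true_diagnoses, k):
    """Fraction of vignettes whose true diagnosis appears in the app's top-k list."""
    hits = sum(
        1 for ranked, truth in zip(ranked_outputs, true_diagnoses)
        if truth in ranked[:k]
    )
    return hits / len(true_diagnoses)

# Hypothetical ranked outputs from one app for three vignettes.
app_output = [
    ["migraine", "tension headache", "sinusitis"],
    ["gastritis", "appendicitis"],
    ["influenza", "common cold", "covid-19"],
]
true_diagnoses = ["migraine", "appendicitis", "covid-19"]

print(top_k_accuracy(app_output, true_diagnoses, k=1))   # 0.33: only one truth ranked first
print(top_k_accuracy(app_output, true_diagnoses, k=20))  # 1.0: all truths appear somewhere in the lists
```

The gap between the two scores is the point: an app can mention the right answer somewhere in its list far more often than it puts that answer first, which is what a patient is most likely to act on.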
This highlights the potential dangers of relying solely on AI for healthcare advice. While AI has the potential to improve access to healthcare, reduce costs, and improve outcomes, it is not infallible.
Despite these concerns, there are success stories that demonstrate AI's potential. For instance, in a study published in Nature, researchers used AI to predict which patients with non-small cell lung cancer would benefit from immunotherapy.
The researchers trained an AI algorithm on a dataset of 237 lung cancer patients who had received immunotherapy. The algorithm predicted which patients would respond with 86% accuracy, compared with 75% for traditional methods.
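For a sense of the general approach, here is a minimal sketch of training a classifier on patient features to predict treatment response. The synthetic data, feature names, and logistic-regression model are illustrative assumptions; the Nature study's actual features and algorithm are not described here.

```python
# Minimal sketch: fit a classifier on patient features to predict
# immunotherapy response. All data below is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 237  # cohort size mentioned above

# Hypothetical features, e.g. tumor mutational burden, PD-L1 expression, age.
X = rng.normal(size=(n, 3))
# Synthetic response labels loosely tied to the features so there is signal to learn.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Hold out 30% of patients to estimate accuracy on unseen cases.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)
model = LogisticRegression().fit(X_train, y_train)

print(f"held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```

The held-out test split matters: an accuracy figure like the study's 86% is only meaningful when measured on patients the model never saw during training.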
Results like this suggest that AI can improve cancer care by supporting more personalized treatment recommendations.
If you are using AI-powered healthcare tools like ChatGPT, here are some practical tips to keep in mind:

- Treat the chatbot's output as information, not a diagnosis; as the symptom-checker study above shows, these tools are frequently wrong.
- Bring anything significant, such as a warning about possible recurrence, to your oncologist or another healthcare professional before acting on it.
- Remember that AI can complement your care team, as the immunotherapy study suggests, but it cannot replace them.