It seems that everywhere we turn lately we hear something about the promises and the perils of artificial intelligence (AI). AI is touted as a smart, efficient tool that can speed and streamline processes, analyze and manage complex data, and cut time and costs.
In the healthcare setting, it has the potential to be used to review and gather medical information quickly, screen for risk of disease and suggest diagnoses, provide second opinions, prevent harmful medication interactions, identify treatment options and clinical trials, reduce patient wait times, and much more.
But with those benefits come concerns: privacy breaches, inaccurate AI-generated information, impersonal care and, in some cases, even discrimination.
AI is such a hot topic in healthcare that there was a whole panel devoted to it at the December 2024 American Society of Hematology (ASH) Clinicians in Practice luncheon, where I was proud to present the patient perspective on behalf of The Leukemia & Lymphoma Society (LLS).
If you’re a patient, it’s important to understand how AI is being utilized by your doctors to enhance treatment and care. And if you’re a healthcare professional, it’s essential to recognize and address patients’ perceptions and worries about AI.
Patients’ Top Worries About AI
Despite the potential advantages of AI as a healthcare tool, patients are generally distrustful of it. In a study published in Nature Medicine in 2024, researchers offered identical medical advice to two groups. One group was told the advice came from a human physician; the other group thought it came from an AI-supported chatbot. Those who were told the advice was AI-generated were less likely to deem it reliable and less willing to follow it.
The skepticism is understandable. The very words artificial intelligence summon up futuristic visions of robots doing surgery without any human supervision—which I can assure you is not happening in any operating room I know of! But patients want to know that their doctor is in control, that they are receiving the personalized care they need and being seen as individuals, rather than a collection of data points.
A Yale study published in 2022 revealed four things that patients fear most about the use of AI in healthcare:
- Misdiagnoses
- Privacy breaches
- Less time with the doctor
- Increased costs
The study also showed that racial and ethnic minority groups expressed the greatest concern in these areas. This concern is well founded.
AI models trained on historically biased data could disproportionately create barriers for marginalized groups. When it comes to health insurance, for example, AI and algorithms could increase the rate of claim denials, further cementing bias and disparities. It is critical that the data used to train AI algorithms be of good quality: sufficiently powered and unbiased. It's important for healthcare professionals to make sure that they are giving AI tools clear and specific prompts and reviewing the information those tools generate. At LLS we believe that AI technology should be regularly tested, not just for accuracy, but to ensure that it does not perpetuate historical racism and bias in our healthcare system.
Currently, no one is regulating how AI is used in healthcare. A recent study in JCO Oncology Practice explored the ethical issues of AI and called upon oncology medical societies and government leaders to collaborate with patients, clinicians, and researchers to develop policies and guidelines that ensure AI-driven healthcare remains ethically sound, equitable, and patient-centric. We agree. As a patient-focused organization, LLS advocates for regulations to ensure AI safety and security standards, and it continues to partner with state and federal legislators and other organizations and agencies on behalf of the patients and families we serve.
Alleviating Concerns About AI: Two-Way Communication Is Key
While the technology isn’t perfect, it’s clear that AI has become a valuable tool in healthcare. It’s here to stay, and as it becomes even more sophisticated it will be integrated more broadly across a variety of healthcare environments—from hospitals to doctors’ offices to outpatient and homecare settings.
So, it’s important that patients and doctors have frank conversations about how AI is being utilized to improve their care.
Here’s my best advice to both patients and healthcare professionals on talking about AI together.
Tips for Patients: Talking to Your Healthcare Team About AI
- Ask healthcare providers how they are integrating AI in their practice. Is it being used for scheduling? To organize your medical records? As a diagnostic or screening tool? Having that information can help you better understand how AI can be used to improve your care.
- If AI tools are used in diagnosis and treatment, talk to your physician about how the AI information is reviewed by humans. What is the process? Who is responsible for interpreting the results and explaining them to you? Who is making the final recommendations about your care?
- Ask how your personal information is being protected. What safeguards are in place to guard against AI data leaks?
- If you’re worried about less time with your doctor, bring that up in your next visit.
- And finally, remember that while AI can gather information to guide diagnosis and treatment, it should not make those decisions for you or your doctor. You and your family, in collaboration with your medical team, should have ultimate control over decisions about your plan of care.
Tips for Healthcare Professionals: Talking to Patients about AI
- Explain how AI is being used in your practice. The Yale study I mentioned earlier found that most patients wanted to know if and how AI played a role in their treatment.
- Build trust by discussing the benefits and limitations of AI, just as you would with any other healthcare issue. For example, if you use AI in scheduling, it can save patients time and stress when making appointments. But also acknowledge potential problems like leaks of personal information and explain how you safeguard against them. If you rely on robotic surgery aids in the OR, share the advantages, but also make sure patients understand that human doctors are still leading the procedures, not robots.
- Use AI to improve communication. One of my co-panelists, Christopher Manz, MD, MSHP, Assistant Professor of Medical Oncology, Dana-Farber Cancer Institute, shared his research showing how AI tools can prompt doctors to have more meaningful conversations with oncology patients. This helps doctors learn patients' goals and treatment preferences so that care is focused on what matters most to them. And, with its ability to quickly summarize patient documentation, he said, AI can also be used to improve communication between care teams.
- Make it clear that while AI might generate information to guide diagnosis and treatment, it does not make decisions about patients' care.
- Reassure patients that AI is not a substitute for doctor-patient interaction and that care will remain personalized and tailored to each patient. AI may be a helpful tool in healthcare, but only human medical teams can provide patients with the individualized attention they need and deserve.
ABOUT THE AUTHOR
As LLS's Chief Medical Officer (CMO), Gwen Nichols, M.D., plays a critical role in advancing cures through a unique combination of clinical, academic and pharmaceutical experience. She oversees LLS's scientific research portfolio, patient services, and policy and advocacy initiatives. Dr. Nichols leads an international team of preeminent leaders in pediatric acute leukemia to conceive, develop and implement LLS PedAL, a first-of-its-kind global master clinical trial and a key component of the Dare to Dream Project, transforming treatment and care for kids with blood cancer.
A physician and scientific researcher, Dr. Nichols has dedicated her career to advancing cures for cancers. Before joining LLS, she was oncology site head of the Roche Translational Clinical Research Center, where she worked to develop new cancer therapies, translating them from the laboratory to clinical trials. Prior to joining Roche in 2007, Dr. Nichols was at Columbia University for more than ten years, where she served as the director of the Hematologic Malignancies Program.
While at Columbia University, Dr. Nichols maintained an active clinical practice and received the prestigious honors of "Physician of the Year" from Columbia University and the "Humanism in Medicine Award" from the Association of American Medical Colleges.