Medical students use AI to practice communication skills
By Patricia Waldron
At Weill Cornell Medical College, students have a new tool for polishing their bedside manner and making a diagnosis: an artificial intelligence-powered virtual patient that simulates the doctor-patient interaction.
The simulator, called MedSimAI, has a text-based chat function and a voice conversation mode that approximates a telehealth visit. It gives students a low-stress setting to practice communicating with empathy and to reason through potential diagnoses. Researchers in the Cornell Ann S. Bowers College of Computing and Information Science are developing the platform in collaboration with medical professionals at Weill Cornell Medicine, Yale University and the University of California, San Francisco.
Traditionally, medical students develop their patient interviewing skills through graded interactions with actors posing as patients in a simulation clinic. These Objective Structured Clinical Examinations (OSCEs) are expensive and time-consuming, however, which limits students’ practice opportunities. Researchers have attempted to simulate this experience digitally, but earlier chatbots did a poor job producing realistic patient responses, and pricey virtual reality-based systems did little to improve accessibility.
“Simulation-based learning is known to be highly effective for training future physicians, nurses, veterinarians and other clinical professionals,” said René Kizilcec, associate professor of information science in Cornell Bowers CIS and lead researcher on MedSimAI. “Building on the latest advances in generative AI, we can offer students unlimited opportunities to practice their clinical communication and reasoning skills with immediate feedback and just the right level of realism.”
The MedSimAI platform uses state-of-the-art large language models to generate a patient’s responses based on a script provided by medical educators. It also has a second AI model that evaluates the student’s performance, using the same rubric that experts use to score OSCEs. The model gives immediate feedback – rather than the days- or weeks-long wait typical of traditional OSCEs – and even highlights specific comments that showed empathy or questions that lacked key details.
Dr. MacKenzi Nicole Preston, assistant professor of clinical pediatrics at Weill Cornell Medicine, is associate director of the Clinical Skills Center, where students practice with robotic mannequins and actors. She is working with Kizilcec’s team to test the efficacy of MedSimAI as part of the curriculum, and said feedback from the students has been positive.
“Part of the physician’s job is being able to communicate in a way that helps patients to be comfortable,” she said. But in addition to showing compassion, she said, “it’s essential that physicians learn to ask the right questions and interpret the information they get in a way that brings them closer to the truth.”
Yann Hicke, a doctoral student in the field of computer science, has been building the platform and developing specific cases to help students prepare for their OSCEs. “The platform provides the opportunity for ‘deliberate practice,’ where students can see their strengths and weaknesses and seek out specific cases that let them practice skills they are lacking,” he said.
This year, first-year students took a complete medical history through MedSimAI as part of a full-day simulation treating a patient with rheumatic heart disease, while second-year students used it in their pediatric rotation to practice taking a child’s history from a parent.
“It’s very natural to use,” said Kellen Vu, a first-year student at Weill Cornell Medical College. “It’s voice-based, so that lets the conversation flow smoothly. I think it’s important for practicing your bedside manner, because tone and phrasing matter a lot in real life with patients.”
Vu found it especially helpful that the platform provided a list of possible diagnoses along with key symptoms for each – and noted whether he had asked about those symptoms – which helped him identify gaps in his questioning.
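The article does not describe MedSimAI’s implementation, but the symptom-coverage feedback Vu describes can be illustrated with a minimal sketch: given a conversation transcript and a hypothetical mapping of candidate diagnoses to key symptoms, report which symptoms the student asked about and which were missed. (The case data, function name, and keyword-matching approach here are all illustrative assumptions, not the platform’s actual logic.)

```python
# Illustrative sketch (not MedSimAI's real code): flag which key symptoms,
# per candidate diagnosis, a student asked about during an interview.

TRANSCRIPT = [
    "Hello, what brings you in today?",
    "Have you had any fever or chills?",
    "Do you notice joint pain when you move?",
]

# Hypothetical case data: candidate diagnoses mapped to their key symptoms.
CASES = {
    "rheumatic fever": ["fever", "joint pain", "rash"],
    "viral infection": ["fever", "fatigue"],
}

def symptom_coverage(transcript, cases):
    """Return, per diagnosis, which key symptoms appear in the transcript."""
    text = " ".join(transcript).lower()
    return {
        diagnosis: {symptom: (symptom in text) for symptom in symptoms}
        for diagnosis, symptoms in cases.items()
    }

report = symptom_coverage(TRANSCRIPT, CASES)
for diagnosis, symptoms in report.items():
    missed = [s for s, asked in symptoms.items() if not asked]
    print(diagnosis, "- missed:", missed or "none")
```

A production system would of course use an AI model to judge whether a question genuinely probed a symptom, rather than simple keyword matching; this sketch only shows the shape of the gap report a student sees.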
The team is already developing additional cases, as well as a module that simulates speaking with another doctor, so that medical students can practice calling for a consult. They are also working with collaborators at Yale and UCSF to incorporate MedSimAI into their medical school education programs.
Long term, Kizilcec’s team aims to extend the platform to other clinical environments and advance medical education research.
“While nothing replaces practice with human patients,” he said, “tools like MedSimAI are a cost-effective way to augment clinical education and serve as a research platform to find new ways to train future clinicians.”
Patricia Waldron is a writer for the Cornell Ann S. Bowers College of Computing and Information Science.