There are growing concerns over how some people, including teens and children, rely on AI chatbots for mental health support. OpenAI just announced it is implementing changes to ChatGPT after the parents of a teen who died by suicide sued the company. The following Cornell University experts are available for interviews.
Tanzeem Choudhury is a professor of integrated health and technology at Cornell Tech. Her research group creates wearables and multi-modal AI systems to better measure and intervene on health symptoms and behaviors.
Choudhury says:
“The recent reports of individuals experiencing mental health crises and suicide following interactions with chatbots or viral social media content are deeply concerning. At the same time, there is a severe shortage of mental health services both in the U.S. and globally, and for many people AI tools are the fallback option and the only way to get help. It is also very hard to measure how many people these tools may have helped at the moment of crisis.
“Simply blocking access or providing a static list of crisis hotlines is insufficient for those in immediate need. There is an urgent need for public health guidelines developed jointly by experts in technology and mental health. These guidelines should focus on interaction design that allows chatbots to shift personas or responses when signs of at-risk behavior are detected; embedding evidence-based approaches that increase help-seeking behaviors directly into AI tools; and safely integrating AI into mental health care pathways alongside humans, ensuring AI therapy complements, supports, and scales the reach of human providers.”
Dan Adler, a postdoctoral associate, studies the implementation of data-driven measurement solutions in healthcare, focusing specifically on mental and behavioral healthcare.
Adler says:
"It's extremely unfortunate but not necessarily surprising that AI chatbots are contributing to suicidal ideation and delusions. Depression and anxiety are on the rise, and as the former surgeon general Vivek Murthy stated, we're suffering from a loneliness epidemic. People are turning to AI tools for mental health support, but popular AI technologies like ChatGPT were developed to be engaging and sycophantic, not to support someone in a tough situation. We need ways – either through improving these technologies or regulation – to get individuals experiencing a mental health crisis off of these popular AI platforms and to more evidence-based support. We need to better support young people, parents, and our communities to understand what 'safe AI interactions' look like."
Qian Yang, an assistant professor of information science, is a human-AI interaction designer and researcher and co-directs the Cornell Digital and AI Literacy Initiative.
Yang says:
“Direct-to-consumer AI chatbots can now hold clinical‑level conversations (e.g., about suicide or eating disorders) but are not regulated as medical devices – and that blurred line creates problems. There is a case for treating ChatGPT or Meta’s chatbots as medical devices when they are used for clinical conversations, which could mean stricter regulation and parental supervision for children’s use. More broadly, we should consider designing, evaluating, and regulating these systems based on how they are used rather than on the system or underlying model itself.
“Two useful analogies clarify options: AI chatbots as nutritional supplements and AI chatbots as primary care. Nutritional supplements are consumer well‑being products regulated by the FTC: they face strict rules about health claims and advertising. By contrast, primary care sits between well‑being and clinical care: people seek general advice from primary care clinicians, who are legally responsible for treating common conditions and promptly referring serious cases.
“Today’s gen‑AI chatbots often imply they can act like counselors but do not clearly state what mental‑health issues they cannot address. When clinical concerns arise, their typical response frequently stops at urging users to contact a crisis helpline. So, should society treat AI chatbots more like nutritional supplements or more like primary care? Either way, this raises the need for clearer public education and regulatory discussion.”