
Can AI improve access to mental health care? Possibly, Stanford psychologist says

A Stanford psychologist discusses the future of psychiatric artificial intelligence, including the challenges and potential benefits for AI-based mental health assessment.

"Hey Siri, am I depressed?" When I posed this question to my iPhone, Siri's reply was "I can't really say, Jennifer." But someday, software programs like Siri or Alexa may be able to talk to patients about their mental health symptoms to assist human therapists.

To learn more, I spoke with Adam Miner, PsyD, an instructor and co-director of Stanford's Virtual Reality-Immersive Technology Clinic, who is working to improve conversational AI to recognize and respond to health issues.

What do you do as an AI psychologist?

AI psychology isn't a new specialty yet, but I do see it as a growing interdisciplinary need. I work to improve mental health access and quality through safe and effective artificial intelligence. I use methods from social science and computer science to answer questions about AI and vulnerable groups who may benefit or be harmed.

How did you become interested in this field?

During my training as a clinical psychologist, I had patients who waited years to tell anyone about their problems for many different reasons. I think we should look for opportunities to provide care when people are ready and willing to ask for it, even if that is through machines.

I was reading research from different fields, like communication and computer science, and I was struck by the idea that people may confide intimate feelings to computers and be impacted by how computers respond. I started testing different digital assistants, like Siri, to see how they responded to sensitive health questions. The potential for good outcomes -- as well as bad -- quickly came into focus.

Why is technology needed to assess the mental health of patients?

We have a mental health crisis and existing barriers to care -- like social stigma, cost and limited access to treatment. Technology, specifically AI, has been called on to help. The big hope is that AI-based systems, unlike human clinicians, would never get tired, would be available wherever and whenever a patient needs them, and would know more than any human ever could.

However, we need to avoid inflated expectations. There are real risks around privacy, ineffective care and worsening disparities for vulnerable populations. There's a lot of excitement, but also a gap in knowledge. We don't yet fully understand all the complexities of human-AI interactions.

People may not feel judged when they talk to a machine the same way they do when they talk to a human -- the conversation may feel more private. But it may in fact be more public because information could be shared in unexpected ways or with unintended parties, such as advertisers or insurance companies.

What are you hoping to accomplish with AI?

If successful, AI could help improve access to care in three key ways. First, it could reach people who aren't accessing traditional, clinic-based care for financial, geographic or other reasons like social anxiety. Second, it could help create a 'learning health care system' in which patient data is used to improve evidence-based care and clinician training.

Lastly, as a licensed clinical psychologist, I have an ethical duty to practice culturally sensitive care. But a patient might use a word to describe anxiety that I don't know, and I might miss the symptom. AI, if designed well, could recognize cultural idioms of distress or speak multiple languages better than I ever will. But AI isn't magic. We'll need to thoughtfully design and train AI to do well with different genders, ethnicities, races and ages to prevent further marginalizing vulnerable groups.

If AI could help with diagnostic assessments, it might allow people to access care who otherwise wouldn't. This may help avoid downstream health emergencies like suicide.

How long until AI is used in the clinic?

I hesitate to give any timeline, as AI can mean so many different things. But a few key challenges need to be addressed before wide deployment, including privacy, the impact of AI-mediated communication on clinician-patient relationships and the inclusion of cultural respect.

The clinician-patient relationship is often overlooked when imagining a future with AI. We know from research that people can feel an emotional connection to health-focused conversational AI. What we don't know is whether this will strengthen or weaken the patient-clinician relationship, which is central to both patient care and a clinician's sense of self. If patients lose trust in mental health providers, it will cause real and lasting harm.

