How Could Artificial Intelligence (AI) Be Used in Psychotherapy & Counseling?

Artificial intelligence refers to the ability of machines to carry out cognitive functions typically performed by humans, including thinking, learning, using language, and problem-solving. The learning part allows the machine to continually refine its processes and output relative to a goal, much like when a human learns to perfect a golf swing or master a new language. 

The machine and the human brain have much in common, which is unsurprising given the long-term, symbiotic relationship between computer science and cognitive science. Both brain and machine are limited by the amount and type of data available to them, but the machine has the advantage of being much faster and potentially less biased in its reasoning.

People’s attitudes toward AI in general range from fear through cautious acceptance to excitement about AI’s potential. In the interests of transparency, I probably land in the middle. I am excited about the potential of AI in therapy but worry that the necessary guardrails will not be in place in time to avoid harms. A major source of concern is the rapid pace at which the technologies are evolving. Some people with rather impressive credentials have had the following to say about AI:

“It would take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded. … [AI will be] either the best, or the worst thing, ever to happen to humanity” — Stephen Hawking

“We need to be super careful with AI. Potentially more dangerous than nukes.” — Elon Musk

“I am in the camp that is concerned about superintelligence. [At] first, the machines will do a lot of jobs for us and not be superintelligent. That should be positive if we manage it well. A few decades after that though, the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.” — Bill Gates

While people can disagree about the potential for harm from AI, there is no arguing that AI is here, advancing rapidly, and poised to influence many aspects of life in ways that are currently unimaginable. The use of AI in psychotherapy is not some proposed future event; it is here now. To maximize usefulness and avoid harms, counselors should stay up to date on AI technologies and, to the best of their abilities, take a proactive stance on ethical issues before they arise.

What Can AI Do in Psychotherapy?

AI participation in psychotherapy can range from carrying out clerical duties to serving as a virtual therapist. 

AI can already automate many of the clerical tasks faced by busy therapists. It can manage intake interviews, documentation, notes, and session transcripts. Apps can identify themes in transcripts and sketch out templates for reports to insurance companies. If done correctly, using AI to manage these routine tasks can free up precious therapist time to do things that are still best left to humans, such as careful evaluation of differential diagnoses and individualized treatment plans.

On the client side, AI can streamline similarly routine tasks. Many therapists assign homework to their clients to help them keep track of progress and challenges between sessions. Automating these tasks can increase compliance and accuracy.

Moving beyond simpler, clerical tasks, AI has shown promise in training future clinicians. Thousands of hours of transcripts would be impossible for a human to digest, but AI can point out what is working well and where a student needs to improve. An AI-enabled avatar can serve as a lifelike virtual client, providing students with interviewing practice.

Various apps, such as Woebot and Wysa, operate on text to carry out several tasks usually performed by therapists. Apps can evaluate the clients’ texts and help them recognize emotions and thought patterns. The apps can coach skills for dealing with anxiety or build on strengths like resilience. These bots can even put a technical, clinical name to what the client is experiencing and provide advice for dealing with challenges. That sounds very much like a therapist!
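To make the idea concrete, here is a deliberately simplified sketch, in Python, of how a text-based app might flag possible thought patterns in a client's message. The pattern names and trigger phrases are illustrative assumptions of mine; commercial apps such as Woebot and Wysa rely on far more sophisticated language models, not simple keyword matching like this.

    # Toy illustration only: flag possible cognitive patterns in client text.
    # Real apps use trained language models, not hand-written keyword lists.
    COGNITIVE_PATTERNS = {
        "catastrophizing": ["ruined everything", "it's hopeless", "disaster"],
        "all-or-nothing thinking": ["always", "never", "completely"],
        "mind reading": ["they must think", "everyone thinks"],
    }

    def flag_patterns(message: str) -> list[str]:
        """Return the names of any patterns whose trigger phrases appear."""
        text = message.lower()
        return [name for name, cues in COGNITIVE_PATTERNS.items()
                if any(cue in text for cue in cues)]

    print(flag_patterns("I always mess up; this has ruined everything."))
    # -> ['catastrophizing', 'all-or-nothing thinking']

Even this toy version shows why human oversight matters: keyword matches can easily mislabel an ordinary sentence as a distortion, which is exactly the kind of error a clinician would catch.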

A further extension of the use of AI in therapy is the AI-enabled robot. Instead of interacting with a screen, the AI is embodied in everything from a baby harp seal (Paro) to a human-like robot. These robots show promise as companions, teachers of social skills to children with autism spectrum disorder, and adjuncts to traditional treatment for many other mental health concerns. 

More controversially, AI-enabled robots are being investigated within the context of human sexuality. While the therapeutic value of sex robots seems promising to some, others have voiced ethical concerns requiring further research. For example, we do not know if absolute control over a robot would generalize to interactions with other people, or whether the use of robots would enhance or decrease a client’s interest in pornography or propensity to commit sex crimes.

The Pros of Using AI in Therapy

Many of the existing AI tools for psychotherapy emphasize the ability to reach underserved populations. People who need psychotherapy face many obstacles. It is often expensive and poorly covered by health insurance, if the person has health insurance at all.

Psychotherapists are not exactly a diverse population, and individuals from marginalized groups might feel uncomfortable with a therapist who is not from their own community. People might live in rural areas or areas not served well by psychotherapists. 

Finally, even people with access to psychotherapists might prefer an AI solution due to privacy concerns. Many situations that bring a person to a psychotherapist are potentially embarrassing, and the intake process can involve equally uncomfortable questions. Some might prefer sharing sensitive information with a machine rather than with even the kindest, most empathic stranger. A machine often seems less judgmental. Would you rather have your bathroom scale or your physician talk to you about your holiday weight gain?

Other advantages of the AI therapist are quite practical. Your AI therapist is available around the clock, no appointment needed. It never runs out of patience or forgets what you said in the last session. You won't feel rushed out of the office to make room for the next person in line; the client can work at their own pace. Therapists are highly educated and might forget that not all clients can absorb information at the same rate.

The Cons of Using AI in Therapy

We see few, if any, concerns about the use of AI in clerical tasks by either client or therapist, except that all generated content must be reviewed for accuracy. If you have experimented with AI tools such as ChatGPT, you know that review by a set of human eyes is essential for effective use. The content generated by AI is only as good as the data on which it is based, and the learning capabilities of today's AI mean that it can pick up misinformation easily. ChatGPT is notorious for inventing fake citations, which can then become part of the “learned” universe. AI will likely continue to be refined to limit some of these mistakes, but in the meantime, too much dependence on AI tools for clerical duties is risky.

Clinicians considering adding AI to their practices should be mindful of the possible harms that could result if the AI malfunctions. We all have enough experience with glitches in our phones and computers to know that these systems are not foolproof, and AI therapists are going to glitch, too. Having plans for these eventualities, and frank conversations with clients about the systems' strengths and weaknesses, could soften the blow should something go wrong.

AI of all kinds comes with a built-in problem of confidentiality. People share their innermost secrets with psychotherapists. How does the AI system manage this sensitive data? How secure is the storage? What are the rules for data use? How resistant are the AI systems to hacking and manipulation? 

The collection of auditory and visual data can be especially problematic from a cybersecurity point of view. Yet, the systems and their use always seem to develop ahead of these necessary guardrails. Clients should be fully informed of any privacy and security concerns before they engage with any AI system. But we all know how often we read those terms of service before clicking through!

Because AI depends on the content to which it has access, any biases in that content will manifest in the AI's responses. This raises the possibility of racist, sexist, ageist, and other biased responses finding their way into the conversation. Most clients are probably unfamiliar with how these algorithms work and would be deeply troubled by a negative experience of this type.

Therapists are highly trained to be sensitive to the limits and boundaries of a good therapeutic alliance. Of all the social interactions we manage, this is among the toughest to navigate. A therapist wants to be warm, open, and authentic while remaining a professional, not a friend. Can current AI manage this type of complex human interaction? Will the client become overly attached to the AI?

Training programs for psychotherapists are also behind the curve on AI. Some programs recognize this gap and are scrambling to fill it. Still, the majority seem to be conducting business as usual, as if AI were something out of a science fiction film rather than today's reality. Professional organizations like the American Psychological Association are beginning the conversation, but more of this type of high-level guidance for practitioners is sorely needed.

Therapists would also benefit from guidance and training that recognizes that their clients may already use AI tools. This is just a further extension of the sudden explosion of medical advice online that caught medical practitioners somewhat by surprise. How does a practitioner best handle these situations?

Finally, it should not be surprising that healthcare managers often see AI as a way to reduce costs. Although AI can help underserved individuals as a substitute for a human therapist, ethicists do not want to see a broad trend in which human therapists are routinely switched out for apps. Hybrid approaches, in which a human therapist uses AI tools, might serve clients best.

Going Forward

Whether a psychotherapist plans to incorporate AI into a practice or not, keeping up with the technologies is a must. Incoming clients might have already used AI tools, so without understanding these tools, the therapist is at a disadvantage. AI is moving at a remarkable pace, so educating ourselves now will help us deal with future developments.

A related issue will be clients' reactions to AI's impact on their own lives and communities. AI will replace jobs, and it will challenge any notion of privacy or confidentiality. Clients might become frightened or upset by developments they do not fully understand.

Counselors should remain mindful of the ethical challenges raised by AI and be prepared to advocate for their clients’ best interests. Often, we find that the people developing technologies share a naïve assumption that “somebody else” will handle the ethical constraints on whatever they invent. That “somebody else” is us. We can’t abdicate that responsibility. Therapists must push for transparency regarding privacy, data security, and other ethical concerns as new AI entities are presented.

Technological breakthroughs, whether we’re talking about the steam engine or the Wright brothers’ flying machine, are always a little scary at first. There are setbacks and accidents. AI is likely to be no different, but where it is different is the pace at which AI is poised to change our everyday lives. Our best defense against unwanted outcomes is to pay attention, stay educated, and fearlessly advocate for our clients’ best interests.

Laura Freberg, PhD

Writer & Contributing Expert

Laura Freberg serves as professor of psychology at Cal Poly, San Luis Obispo, where she teaches introductory psychology and behavioral neuroscience.

Dr. Freberg is the author or co-author of several textbooks, including Discovering Psychology: The Science of Mind, Discovering Behavioral Neuroscience, Applied Behavioral Neuroscience, and Research Methods in Psychological Science. She served as President of the Western Psychological Association (WPA) in 2018-2019.