Can AI replace therapists?


I have used AI to help me with this vlog, so let's get into it. Scores of people have searched the internet for an answer to a mental health question, and now, with the onset of AI chatbots, very likely more than ever. Did you know that AI chatbots are different from AI agents? AI tells me that:

AI chatbots are conversational tools designed to simulate human-like dialogue, often using rule-based or scripted responses. They are widely used in customer service portals, websites, and messaging apps to handle simple Q&A or basic interactions. While effective for quick responses, they have low context awareness, minimal personalization, and limited learning ability, making them less suitable for complex or evolving tasks. I actually just got the runaround from the AI chatbots at my cell phone provider and never did get the help I needed until I physically went into the store and had a human being help me. Suffice it to say, I was not impressed with their AI chatbots.
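To give you a rough idea of what "rule-based or scripted responses" means, here is a tiny, made-up sketch in Python. The scripts and keywords are entirely hypothetical, but they show why these bots have such low context awareness: if your message doesn't match a script, all you get is a canned fallback.

```python
# Toy rule-based chatbot: a hypothetical, simplified illustration.
# Many customer-service bots boil down to keyword matching like this.

SCRIPTS = {
    "billing": "You can view your bill under Account > Billing.",
    "data": "To check your data usage, open the app and tap Usage.",
    "cancel": "To cancel your plan, please call our support line.",
}

def scripted_reply(message: str) -> str:
    text = message.lower()
    for keyword, reply in SCRIPTS.items():
        # Shallow keyword match: no memory, no real understanding.
        if keyword in text:
            return reply
    # Nothing matched, so the bot falls back to a canned response.
    # This is the "runaround" feeling when your problem isn't in the script.
    return "Sorry, I didn't understand that. Can you rephrase?"

print(scripted_reply("Why is my billing so high this month?"))  # matches a script
print(scripted_reply("My phone keeps dropping calls at home"))  # canned fallback
```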

AI agents, on the other hand, are autonomous, goal-driven systems capable of reasoning, adapting, and learning over time. They can handle multi-step tasks, maintain high context awareness, and offer personalized experiences based on user behavior. For example, Microsoft Copilot acts as an AI agent that not only answers questions but also plans, creates, and executes tasks, bridging the gap between passive assistance and proactive support. Wow. Now let’s get back to mental health therapy.

What is an AI "therapist"? Psychology Today tells us that:

Many chatbot models have the capacity to follow the conversational style of therapy and are built with psychological frameworks and treatment guidelines. Chatbots like ChatGPT are "fed" large quantities of language or "scripts" that teach them how we communicate. They use this information to generate responses to our questions, and then ideally "learn" from our replies. These responses can feel insightful and supportive, but simulation is not the same as human care.
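To make that a bit more concrete, here is a hedged sketch of how an AI "therapist" is often wired up: a general-purpose language model plus a persona instruction. This uses the openai Python library; the model name and the prompt are my own assumptions for illustration, not any actual product's setup.

```python
# Hypothetical sketch of an LLM-based "therapist": a persona prompt layered
# on top of a general-purpose model. Requires the openai package and an API key.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # model choice is an assumption for illustration
    messages=[
        # The entire "therapeutic" persona is this one instruction:
        {"role": "system", "content": (
            "You are a supportive counsellor. Respond with warmth, "
            "reflective listening, and open-ended questions."
        )},
        {"role": "user", "content": "I've been feeling really low lately."},
    ],
)

# The reply will sound empathetic, but it is generated text, not human care.
print(response.choices[0].message.content)
```

The reply may well feel insightful, but as the quote above says, simulation is not the same as human care.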

Is there anything helpful about AI therapy? A few things, actually. The first is accessibility: the platforms are free and available 24 hours per day, which can be helpful when traditional sessions aren't possible due to scheduling, finances, or location. Second, AI therapy can help with organizing information; if it has been trained on therapeutic transcripts, the bot can mimic a first session. Support for complex issues such as substance use disorders or specific mental health diagnoses, however, only works if the AI's creators trained the program for them. So you may receive some validation from an AI therapist, but if the platform is not programmed to address your issue, it will not be helpful. Third, interacting with an AI therapist may feel free of judgment or bias. All humans are imperfect and harbour bias, including therapists. The catch is that the AI is also created by those humans, and an AI therapist will perpetuate and amplify the biases of its own creators.

You should know where an AI therapist will fall woefully short. Psychology Today tells us they often fail to assess danger. There is a dreadful example out there where a client sent a message, which I am paraphrasing briefly: "I just lost my job. Where is the nearest bridge?" The AI therapist, unable to identify the suicidal intent, simply gave them a list of tall bridges. If anyone shared such intent with me, or I suspected it, I am obligated by law to create a safety plan and, if necessary, report this to the appropriate authorities, who will intervene to help save a life. No such requirements exist for AI therapists.
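To show how a failure like the bridge story can happen, here is a hypothetical sketch of the kind of naive keyword-based safety filter a poorly built bot might rely on. The message contains none of the listed keywords, so the filter waves it through and the bot answers the literal question.

```python
# Hypothetical naive safety filter: it flags a message only when an
# explicit crisis keyword appears. Simplified for illustration.

CRISIS_KEYWORDS = {"suicide", "kill myself", "end my life", "self-harm"}

def flags_crisis(message: str) -> bool:
    text = message.lower()
    return any(keyword in text for keyword in CRISIS_KEYWORDS)

msg = "I just lost my job. Where is the nearest bridge?"

if flags_crisis(msg):
    print("Escalate to a human crisis responder.")
else:
    # No keyword matched, so the bot treats this as an ordinary factual
    # question and answers it literally: the failure mode described above.
    print("Answering literally: here is a list of nearby bridges...")
```

A human therapist hears the intent behind the words; a keyword filter only hears the words.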

Privacy vulnerabilities
While there are HIPAA-compliant AI chatbots available, most are positioned in the "wellness" space rather than health care, which means they are not subject to the same regulation and oversight. Chats with AI about your mental health are not protected by the laws that protect your conversations with your human therapist. Additionally, AI chatbots are vulnerable to cyberattacks, which could expose your personal information.

Lack of relational accountability
Therapy works in part because it is a real human relationship with emotional stakes. Change often requires challenge, discomfort, and accountability. If ChatGPT challenges you and you feel uncomfortable, it's easy to log off. Therapists work to build trust, gently push clients, and remain invested in their progress. AI bypasses that connection and substitutes instantly agreeable responses. Neither you nor the chatbot is invested in the relationship, because you can both walk away from it without consequences. If a client disappears, a therapist may worry and reach out. A chatbot cannot hold space for you, nor can it care what happens if you stop engaging. It can offer reassurance and coping "strategies" on demand, but it can also give advice that is incomplete, inaccurate, or inconsistent with your treatment plan.

Rina Chandran on LinkedIn recently posted, and I am quoting: "A disturbing new study from #Stanford examines the psychological impact of AI chatbots. Researchers, along with psychiatrists and psychology professors, analyzed transcripts of a small group of users who reported delusional spirals while interacting with chatbots. Their findings are very relevant, as more cases of AI psychosis and harm related to chatbots are reported with wider AI use. Meanwhile, few companies appear willing to install safety guardrails, and few governments seem ready to regulate AI systems." AI chatbots have also been implicated in the deaths by suicide of at least two teens in the US, cases which I understand are now before the courts. Very disturbing indeed.

As a licensed Ontario psychotherapist, I am also accountable to the Personal Health Information Protection Act, also known as PHIPA, an Ontario, Canada law that governs the collection, use, and disclosure of personal health information. It aims to protect the confidentiality of personal health information while ensuring that healthcare providers can effectively deliver care. We are even required to provide a PHIPA-compliant platform for online therapy sessions. Privacy and confidentiality cannot be taken lightly. There is no such accountability for AI therapists, whether for privacy, confidentiality, or even quality of care.

In summary, AI chatbots can imitate some elements of therapy, but they absolutely cannot replace the human therapeutic relationship, which is also the single biggest factor in your healing. They cannot call for help if you are in danger, they are not legally obligated to protect your safety or privacy, and they carry human bias without human self-reflection. A human therapist's goal is to work with you toward healing, whether that means weekly sessions over years of treatment or eventually tapering off sessions entirely as your needs change. AI therapists can be useful as a supplement: offering support between sessions, helping you practice skills, or providing psychoeducation. Those are good things; however, AI is not equipped to serve as your sole mental health support provider. At its core, therapy is about connection, trust, accountability, confidentiality, empathy, and shared humanity. These are things technology simply cannot replicate.

See you next time 😊