AI Therapy: Is it Worth a Try? PART 1
Before we jump in, let me just start by saying that I am a huge fan of AI. I realized after initially finishing this blog that it gave off an almost “Terminator-esque” vibe, but I like to think of AI more as a digital sidekick who never steals your snacks or forgets your birthday than as an evil sci-fi villain.
While AI can sometimes raise concerns, particularly in creative fields, I do believe that when used thoughtfully, it can be a real asset. It’s all about using it in the right way to enhance your work and life.
Is AI therapy safe? A look at privacy and confidentiality
Let’s say you open up about a personal struggle to an AI chatbot; maybe it’s anxiety about your job, relationship issues, or that sense of loneliness that’s been lingering just a little too long. It seems safe because it’s just you and your screen, right?
But have you ever stopped to wonder where that information is going? Who owns it? And what happens to it after the chat ends?
These are the kinds of questions we need to ask when it comes to using AI tools for mental health support. Talking to an AI might feel easier because there is no therapist in the room with us, giving us a sense of anonymity, but the reality behind the scenes is much more complex. Unlike traditional therapy, which is bound by strict legal and ethical rules around confidentiality, AI mental health tools exist in a grey zone.
So, if you are trusting them with your most vulnerable thoughts, it’s important to understand what that means.
The illusion of privacy?
Most AI tools, whether general-purpose chatbots like ChatGPT or mental-health-specific apps, use conversations to train and improve their models. That doesn’t necessarily mean a human is reading your chat log, but it does mean your input can be stored, analyzed, or used to enhance the system’s performance. Unless the tool explicitly states that it will not save your data (and even then, it’s worth verifying), your words might not be as private as they feel.
In traditional therapy, confidentiality is protected by law. Therapists in Canada must adhere to professional standards under legislation such as Ontario’s PHIPA (Personal Health Information Protection Act), which ensures that your data is stored securely, access is restricted, and sharing information requires your informed consent.
AI systems are not therapists, and they are not regulated in the same way, even if they’re designed to “sound” like one.
Who owns what you say?
Ownership is another major concern. On many AI platforms, anything you write becomes part of a data set that can be used to train or improve future models. Your words, even deeply personal ones, could shape how the system responds to other people. In other words, you are not just a user; you are also a data point, and mining your experience for improved engagement will always be part of the AI model.
This becomes more troubling when you consider commercial interests. Some companies may sell anonymized data to third parties for research or marketing. Even if your name is not attached, your emotional experiences might be feeding algorithms you know nothing about.
What are you actually agreeing to?
Before you type anything into an AI platform, it’s worth reading the privacy policy and terms of service. Yes, really.
These documents can be long and boring, but they outline whether your data will be collected, stored, shared, or used to train future versions of the AI. Some apps may present your data as anonymous; however, that does not mean it cannot be re-identified, especially if you share sensitive details.
Take, as an example, an individual who shares information about suicidal thoughts or domestic violence. In a therapy setting, that would trigger a safety protocol: your experience would be handled with thoughtful, empathetic care, and you would be supported in finding the right supports and collaborating on the safety plan that’s best for you.
With AI, responses can vary, and the data could be kept without anyone ever intervening. Unfortunately, this creates some serious ethical questions, which we will cover in another blog. A question to sit with for now: should a system that cannot take responsibility or respond with direct action be allowed to support our most vulnerable populations during a crisis?
In all fairness to AI platforms, some have taken steps to include disclaimers such as “this is not a real therapist,” add emergency contact information, suggest other supports, or explain the limits of what they will engage with. But many still fall short of offering the kind of safe, responsive, and protected environment that real therapy provides.
How does AI compare to human therapy?
It’s worth highlighting how different AI therapy is from what happens in a licensed therapy session. Yes, therapists keep notes too, but those notes are protected by confidentiality laws and ethical codes. They are not used to train anything. They are not sold. They exist to support your care.
If a therapist needs to break confidentiality, such as in a case involving imminent risk of harm, they are required to follow legal and clinical guidelines to keep you safe. An AI, no matter how smart it may sound, cannot take on that kind of responsibility. It cannot call emergency services or offer a safety plan in a crisis.
Therefore, it cannot be held accountable in the same way.
A central tenet of psychotherapy is a focus on doing and saying what is in “the client’s best interest.” This looks different for everyone, depending on their history, their social location, and their access to resources. A therapist builds safety and rapport with a client over multiple sessions that work together to weave a tapestry of that person’s reality. An AI, on the other hand, only responds to the limited information a user provides, plus data compiled across a wide range of users who live very different realities.
A piece of “advice” that fits one user might not be the right advice for another.
So where do we go from here?
This certainly doesn’t mean that AI has no place in mental health. In fact, I believe that some people find AI tools to be very helpful for practicing coping skills, journaling, or working through emotions when human therapy isn’t accessible. But we still need to approach this with our eyes wide open.
If you do choose to use an AI for mental health support, there are some things that I would encourage you to consider:
Check whether the tool complies with any data protection laws, so you know what your conversations will be used for.
Look for a clear and transparent privacy policy.
Think carefully about whether you want to share highly sensitive or identifying details as part of your conversations.
Also, if you are going to use it for mental health support, use it in addition to therapy, not as a replacement for it.
The bottom line:
AI therapy tools can offer support; however, they aren’t confidential in the same way that a therapist is. They do not offer the same protection, regulation, or ethical oversight. So, if you do choose to use them, please do so with awareness and caution. Your mental health deserves the same level of care and respect as your physical health, and that includes knowing where your private thoughts are going when you hit “send”.
Here are some other resources for you to read:
AI: Right or Wrong? 4 Ethical Considerations of AI in Therapy – UpHeal
AI Therapist: Data Privacy – Santa Clara University (scu.edu)
Helping Clients Navigate AI Privacy Risks – Counseling Today (counseling.org)