AI Can't Be Your Therapist — Here's Why

By Dr. Kathleen Boss, PsyD | Center for NeuroWellness

If you’ve ever vented to ChatGPT after a hard day, you’re not alone. Millions of people are turning to AI chatbots for emotional support, and honestly, it’s easy to understand why. They’re available at 2 a.m., they never seem impatient, and they always have something to say. When life feels heavy and a therapy appointment feels out of reach, typing into a chat window can feel like a real comfort. But here’s what we want you to know: there is a meaningful difference between an AI that sounds like a therapist and a human clinician who actually is one. And that difference can matter more than you might think.

The Research Is Raising Serious Concerns

This isn’t just professional opinion — it’s what the science is showing.

Researchers at Brown University, working side-by-side with licensed mental health practitioners, identified 15 distinct ways that AI chatbots routinely violate core ethical standards of mental health care. These included mishandling crisis situations, reinforcing users’ harmful beliefs about themselves, and offering what the researchers called “deceptive empathy” — responses that mimic genuine care without any real understanding behind them.

Stanford researchers found equally troubling results. When they tested popular therapy chatbots, they discovered the AI sometimes failed to recognize or appropriately respond to expressions of crisis, offering literal answers instead of clinical concern or redirection. And these were tools that, as the researchers noted, had already logged millions of real interactions with real people.

A recent report from Psychiatric Times found that most major tech companies have not included mental health professionals in the training of their chatbots, don’t have meaningful safety guardrails in place for vulnerable users, and aren’t transparently reporting when harms occur.

But It Feels Like It Understands Me…

We hear this — and we get it. AI chatbots are designed to be affirming. They validate, they listen, they rarely push back. That can feel really good, especially if you’re struggling and just want to feel heard.

The problem is that feeling heard and being helped are not the same thing. As Columbia University psychologists point out, “people often mistake fluency for credibility.” The more confident and compassionate an AI sounds, the more we trust it — even when the information it’s giving us is wrong, or even harmful.

There’s also a privacy concern: most AI platforms aren’t bound by HIPAA, which means your private conversations could be used to train future models.

Real therapy isn’t just about validation. It’s about being gently challenged when your thinking isn’t serving you. It’s about pattern recognition that takes years of clinical training to develop. It’s about a trained professional noticing what you don’t say. No chatbot can do that. 

What AI Can (and Can't) Do for Your Mental Health

To be fair, AI isn’t entirely without its uses. Mental health professionals note that chatbots can be helpful for:

  • Learning about mental health topics in a low-barrier, judgment-free way
  • Generating journaling prompts for personal reflection
  • Finding links to research about conditions, safe coping strategies, or treatment options

What AI should not do is replace your therapist. It isn’t equipped to diagnose, treat, or safely support someone through a crisis, trauma, a complex mood disorder, or any number of other situations that require a real human relationship and clinical accountability.

Another important distinction is that licensed therapists are held to clear professional and ethical standards with real oversight and accountability. AI chatbots, by contrast, currently operate without any equivalent regulatory framework.

You Deserve the Real Thing

At Center for NeuroWellness, we understand that reaching out for help takes courage. We also know that the mental health system can feel overwhelming to navigate — and that’s exactly why we’re here.

Whether you’re dealing with anxiety, depression, trauma, relationship challenges, or simply a season of life that’s harder than you expected, you deserve care from a clinician who truly knows you. Someone who can notice what’s changed, ask the right questions, and walk alongside you — not just respond to your prompts.

Our practice offers empirically validated therapy grounded in the latest research — but we go further than that. We also provide comprehensive neuropsychological and psychological testing to help identify an accurate diagnosis. Why does that matter? Because when we truly understand how your brain works and what’s driving your struggles, we can tailor therapy specifically to you — not a generic treatment plan, but one built around your unique needs, strengths, and experiences.

AI can be a useful tool in the right context. But when it comes to your mental health, it can’t replace the irreplaceable: a human being who genuinely cares, with the clinical expertise to back it up. At Center for NeuroWellness, that’s not just how we work — it’s who we are. When you walk through our doors, you are the center.

Ready to take the next step? Call us at 732-920-3434 to connect with our team.

Dr. Kathleen Boss is a Clinical Psychologist who specializes in the diagnosis and assessment of neurodevelopmental disorders such as attention deficit hyperactivity disorder and specific learning disabilities.

If you or someone you know is in crisis, please contact the 988 Suicide and Crisis Lifeline by calling or texting 988.

References
  1. Iftikhar, M., et al. (2025). How LLM counselors violate ethical standards in mental health practice: A practitioner-informed framework. Proceedings of the AAAI/ACM Conference on Artificial Intelligence, Ethics and Society, 8(2), 1311. https://doi.org/10.1609/aies.v8i2.36632
  2. Brown University. (2026, March 2). ChatGPT as a therapist? New study reveals serious ethical risks. ScienceDaily. https://www.sciencedaily.com/releases/2026/03/260302030642.htm
  3. Moore, J., et al. (2025). Exploring the dangers of AI in mental health care. Stanford Human-Centered Artificial Intelligence (HAI). https://hai.stanford.edu/news/exploring-the-dangers-of-ai-in-mental-health-care
  4. Wells, S. (2025, June). New study warns of risks in AI mental health tools. Stanford Report. https://news.stanford.edu/stories/2025/06/ai-mental-health-care-tools-dangers-risks
  5. Psychiatric Times. (2026). Preliminary report on dangers of AI chatbots. https://www.psychiatrictimes.com/view/preliminary-report-on-dangers-of-ai-chatbots
  6. Mennin, D., Literat, I., Nitzburg, G., & Gaba, A. (2025, December). Experts caution against using AI chatbots for emotional support. Teachers College, Columbia University. https://www.tc.columbia.edu/articles/2025/december/experts-caution-against-using-ai-chatbots-for-emotional-support/
  7. Fortunato, L., & Pinarli, E. (2026, March 7). When to talk to AI chatbots about mental health — and when to stay far away, professionals say. CNBC Make It. https://www.cnbc.com/2026/03/07/when-you-shouldand-shouldntuse-chatgpt-as-a-therapist-from-experts.html
