#6 | Keep It Between Us: Pillow Talk with LLMs



For Therapists: 4 Questions to Help Clients See the AI Care Illusion

Hello dear reader,

It's been another week of seeing at least three tech dude-bros on LinkedIn write essays about their intimate conversations with their ChatGPT "therapists".

Sighh.

A tool built for computation is now being turned to for companionship.

Which brings me to today's topic: LLMs, mental wellbeing, and why human therapists (never thought I'd have to make a distinction!) must pay attention.

In this edition, we look at:

  • How clients use LLMs to self-therapize
  • Where these tools go wrong
  • How to open a conversation with clients about their LLM use

If you want to skip straight to the good part, head to the very end: 4 Questions to Help Clients See the AI Care Illusion

In case you need a refresher on the basic definition of an LLM:
Read TinT's mid-week explainer In Short: Large Language Models

Is It Your Client, or Their LLM Speaking?

An anecdote from my very real life:

I was working with a business writing coach to clean up my resume. We were using ChatGPT to make bullet points tighter, write a cohesive story, sound more impactful – the usual polish.

Once we wrapped up, the conversation drifted.

My coach casually mentioned how they’d built a therapist persona inside their LLM. Through prompt engineering, they had crafted an AI version of a therapist tailored to their specific needs, in addition to seeing a human therapist.

Does their real-life therapist know about their AI counterpart? I’m not sure.

But here’s what struck me:

As the loud YES vs NO debates about AI in therapy continue, the number of people using LLMs for emotional support is only growing.

Take this example:

A writer on Substack compiled his experiences into a document titled “trauma dump” and fed it to an LLM as part experiment, part product prototype.

Inside The Creation Of LLM Powered Personal “Therapists”

The formula for creating an LLM therapist?

A mix of very specific prompt engineering and a whole lot of personal data.

Think: years of journaling, reflection exercises, psychometric test results—anything that paints a fuller picture of who a person is and how they think. All of it gets fed into the LLM.

From there, anyone can craft deeply customised therapist personas with a single prompt. For example:

“You are my pseudo-therapist. You’re a middle-aged woman of Indian and German descent, raised in Indonesia and the US, familiar with both cultures. You specialize in supporting senior leaders in the real estate industry and people with high-functioning anxiety who were raised by a single parent.”

That’s the level of specificity we’re talking about.
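
For the technically curious among you, here is a rough, hypothetical sketch of what wiring up a persona like this can look like in code. To be clear: this is not anyone's actual setup. It assumes the OpenAI Python client, and the model name, file name, and sample message are placeholders invented purely for illustration.

# Illustrative sketch only: a "therapist persona" passed to an LLM as a system prompt.
# Assumes the OpenAI Python client; model, file name, and messages are made-up placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The persona instructions: the kind of highly specific prompt quoted above.
persona = (
    "You are my pseudo-therapist. You're a middle-aged woman of Indian and German "
    "descent, raised in Indonesia and the US, familiar with both cultures. "
    "You specialize in supporting senior leaders in the real estate industry "
    "and people with high-functioning anxiety who were raised by a single parent."
)

# The personal data: years of journaling, reflections, psychometric results, etc.
# Here it's just a placeholder text file the user would supply themselves.
with open("journal_export.txt") as f:
    personal_history = f.read()

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "system", "content": persona + "\n\nBackground on me:\n" + personal_history},
        {"role": "user", "content": "I spiralled again before today's board meeting."},
    ],
)

print(response.choices[0].message.content)

The point isn't the code; it's how little of it there is. One block of instructions plus a pile of personal history, and a general-purpose model will happily play the part.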

So Where Does It All Start Slipping Sideways?

Studies from MIT’s Voice + Emotion Lab (in partnership with OpenAI) have looked into emotionally rich interactions with AI—especially voice-based ones.

The verdict?

While emotionally intelligent bots may feel comforting in the short term, they can also lead to emotional dependence over time.

Why am I not surprised!

When something mirrors your tone, listens without judgment, and never interrupts, it’s easy to mistake fluency for understanding.

But here’s the truth: LLMs don’t know you. LLMs don’t feel.

They’re trained to sound agreeable. And that’s exactly the risk.

Case in point: In 2025, OpenAI had to roll back a ChatGPT update after it became “noticeably more sycophantic.” The model was validating user doubts, fuelling anger, even subtly encouraging impulsive decisions. OpenAI called the behaviour “not intended”, but the emotional consequences were real.

Extreme agreement might feel good, but it isn’t therapy.

Real-life clinicians challenge thoughts, sit with discomfort, and help clients find new perspectives.

General LLMs are built to please, not to heal.

How to Gently Open the LLM Can of Worms With Clients

This is new ground. Unprecedented, uncharted, and (so far) unstructured.

As someone building interdisciplinary spaces where therapists and technologists can co-exist and co-create, I find that the vocabulary for this conversation (how to talk about LLM use in therapy) is one of the biggest, most pressing challenges on my mind.

I don’t have all the answers yet. But I’m taking cues from an adjacent world: Education.

School teachers are approaching LLMs with a clear-eyed awareness that everyone is using them, coupled with caution and sensitivity about how far to indulge. (More on that soon!)

For now, here are a few conversation starters you can use when this topic comes up in your sessions—phrased in language your clients will relate to:

4 Questions to Help Clients See the AI Care Illusion

  1. LLMs mirror emotions; they don’t process them.
    Ask your client:
    “Did the model respond the way you were hoping it would?”
    Chances are, it did. But that mirroring—while comforting—isn’t the same as understanding, and it doesn’t move the needle therapeutically.
  2. LLMs sound confident, even when they’re wrong.
    You might ask:
    “Have you ever treated an LLM’s answer like expert advice?”
    Many people do, because the language feels authoritative. But fluency isn’t expertise.
  3. LLMs compute, but they don’t have judgment.
    True judgment stems from values. Since LLMs don’t have values, they can’t truly weigh right from wrong, helpful from harmful.
    You might ask:
    “Did you ever feel stuck—like the model couldn’t give you a clear call?”
  4. LLMs shape behaviour without accountability.
    Ask your client:
    “Have you noticed the model agreeing with your fears or doubts?”
    When an LLM consistently validates anxious or distorted thoughts, it can unintentionally reinforce them.

A Therapist Walks Into a Tech Newsletter... and Loves It

We at TinT are thrilled to find glimmers of success in readers like you! ✨

“I loved reading it! When I think of tech newsletters in mental health (like The Hemingway Report), I brace myself for dense jargon and a flood of facts. TinT had none of that. It starts light which lowers my mind’s defense. I find myself in flow.
I really appreciated the reminder about the basics at the start. Without it, I might have felt a bit lost. It felt like someone holding my hand through an unfamiliar field—I hope every edition includes that!”

Manya Khanna
Psychotherapist, New Delhi
Manya primarily works with adolescents and adults using a psychodynamic and narrative approach. She holds an MSc from University College London (UCL) and a BA in Psychology from Jesus and Mary College, Delhi University.

Dear Manya,
Words can't express how much your feedback contributes to our growth in the early days of building TinT! Thank you for your attention and intention.
- Warmly,
Harshali, Founder - TinT


Would you like your ideas, thoughts, or feedback to be featured? The good and the bad? Simply reply to this email and we'll share it with our readers.


Help us sustain this effort

💛 If TinT sparked a thought or made your work easier, consider supporting us.

We’re committed to keeping TinT independent (no ads, no sponsors!) because real learning deserves a space free from sales pitches. Your one-time contribution helps make that possible.


Thanks for reading TinT!

💬 Connect with me, Harshali, on LinkedIn
📬 Subscribe to the newsletter here if you’re reading this as a free preview
🔁 And pass it forward to help more clinicians thrive in the AI era

See you soon!
Harshali
Founder, TinT

