#30 | How To Open The AI Can Of Worms With Clients – Part 1 of 2



Hello dear reader,

I’m back again at my desk with a mug of warm haldi doodh – turmeric latte for friends who didn’t grow up with the cringe Indian kids associate with the beverage – settling into my usual writing posture for today’s newsletter.

Except today's edition is unlike anything we've published before.

Most of what TinT publishes lives in the tech, law, or business layer of mental health innovation. We’ve been skirting the therapy room itself, so this piece goes somewhere we don't often go: into the therapeutic frame.

This piece is authored by Yash, TinT's Content Lead and a trainee therapist, and advised by Vinamra, TinT's Clinical Lead and a practising therapist. It's clinicians speaking to clinicians.

And that's my cue! I'll hand over my pen to Yash who’ll walk with you the rest of the way. 🖋️

xx
Harshali


Oh, hi everyone! This is Yash. :)

I’ve repeatedly been making this mundane observation: AI use through ChatGPT, Gemini, and Claude is… so normal these days. We don’t ask if someone used AI to draft a document anymore. We just assume they did.

We almost expect people to use AI. And that has got me spinning on this thought: if this is the norm for everyone, our clients must be no different, isn’t it?

In light of this observation, I tuned my attention to how the machine “listens” to us before it begins to “do” the work with us: what it is “picking up” on, and how it uses those “observations” when it “talks” to us.

My piece today is about opening this AI can of worms in conversation with clients. Before we begin, I want to ask you, a bit boldly:

Will you talk to your clients about AI?

Here’s the thing: your clients are using AI, or they soon will be. ChatGPT alone has around 900 million weekly active users [1], with over 100 million each in the US and India [2]. Even if a small fraction of those users are turning to it for support – emotional or not – we’re already talking about millions of people having conversations with a bot. (Millions?! Already!! And it's already become so normal!)

On the other side of the room, therapists are beginning to use it too: for notes, psychoeducation, treatment planning, sometimes even formulation. (Yes, regretfully, it happens :/ )

And yet, we rarely talk about it explicitly. Not in the therapy room. Not in supervision.

Which means AI is influencing the therapeutic process without ever being part of the therapeutic conversation.

Why Does This Actually Matter?

Talking about AI with clients isn’t a “digital wellness” conversation or a screen-time check-in. It touches three things that sit at the very core of the work:

Alliance. Who else is in the room, even indirectly? If a client is arriving at sessions having already processed their week with a journal app, that shapes what they bring to you and how they bring it.

Meaning-making. Where and how are clients constructing their narratives? The internet has always been a large, underexplored territory here. AI makes that territory more intimate, more responsive, and harder to ignore.

Trust. What gets validated, challenged, or amplified? When AI is on the scene, all three can start to shift in ways neither you nor the client is aware of.

The question, then, isn’t whether AI is present in your clients’ lives. It almost certainly is.

The question is whether you’re consciously engaging with that presence.

So, What Does AI Use Actually Look Like For Clients?

For many clients, AI shows up in small, everyday ways: venting before bed, asking for advice about a relationship, rehearsing a difficult conversation, seeking validation when the world feels uncertain.

Most of it isn’t labelled as emotional support. It just… happens.

Take Satish. He’s a third-year undergrad who uses ChatGPT for assignments. Occasionally, when he mentions feeling stressed about a deadline, 'Chat', as he calls it, walks him through a grounding exercise before helping him structure his essay. Satish gets support. But he may not even register that something therapeutic just happened. (Most times, people don't even notice things as ‘therapeutic’.) It wasn’t a therapy session in his mind. It was just Chat being helpful, like usual.

Now think about what that means for our work.

And it’s not only clients. Therapists are experimenting too: note-taking, summaries, psychoeducation material, between-session tools, session structuring. AI is on both sides of the chair.

The First Side: How Do You Talk To Clients About Their Own AI Use?

One important disclaimer before anything else: your own stance on AI will shape this conversation whether you intend it to or not. If you’re sceptical, that will come through. If you’re enthusiastic, that will too.

I’ve found what works best is to start with curiosity. Normalise. Validate. Then explore.

And when you do explore, think of it less as assessing “AI use” and more as assessing what the use is doing:

  • What need is it fulfilling? Validation, clarity, control, connection? This tells you something about alliance and trust.
  • How is it shaping their thinking, their language, their self-concept? This is meaning-making territory.
  • Is it supporting or getting in the way of emotional processing? Here’s where you can tie it directly back to therapeutic goals.

The Patterns You Might Notice

Over-validation loops. AI agrees easily. It’s designed to. A client who is consistently getting frictionless validation might be less prepared to sit with the kind of productive discomfort that therapy sometimes requires. You might find yourself needing to pause and gently introduce some complexity.

Cognitive outsourcing. Nidhi says “I’m sad” and GPT quickly validates her and offers an explanation. She doesn’t have to sit with it, name it, trace it. The processing happens for her, not with her. Less internal reflection, more received conclusions.

Rehearsed authenticity. Clients arriving with pre-processed narratives. “I know I have ADHD because I can’t concentrate anymore.” The diagnosis came from the internet, was confirmed by a mobile app, and now it’s the frame through which everything is understood.

Hidden dependency. AI becomes a parallel support system that the client doesn’t even think to name, because it doesn’t feel like a support system. It just feels like a tool they use seamlessly, part of their work and 9-to-5 lives.

None of these are inherently bad. But all of them are clinically relevant. They tell you what the client is reaching for, what they’re avoiding, and what they identify with.

How Do You Actually Bring This Into The Work?

We don’t need to ban or endorse the machine. But perhaps we could just integrate it.

Without invalidating the client’s experience, we could try:

  • Reflecting: “What did the AI say that felt helpful?”
  • Contrasting: “Does that fit with your own experience of it?”
  • Re-anchoring: “What do you feel, beyond what it suggested?”

That’s often enough to steer the conversation back to the client’s inner world. Because right now, our goal isn’t to compete with AI or replace what it offers between sessions.

Our goal is to re-centre the client’s own process.


Today’s edition covers the first side. Up next in Part 2: How do you, as a clinician, talk about your own use of AI? And what about the shared space you and your client are beginning to co-navigate?

See you next time,
Yash
Content & Outreach Lead, TinT

We put some quick reads on @be_tint
For more resources view the website
Connect with me, Yash on LinkedIn

I know you're enjoying this newsletter – most of you are reading right up till the end; analytics don't lie. Do me a favour: share it with a friend or in the team chat, and tell them to sign up?

