#14 | TinT Labs | Dear Clinicians: A Letter from AI PhDs



Dear Clinician: A Letter from AI PhDs

Hello reader,

It's a sunny Sunday in my corner of the world!

Today we continue our TinT Labs five-part special series, co-written by two illustrious PhD researchers who study AI and mental health.

And right now, it's time for part 2.


When we were brainstorming for this piece, my nudge to Aseem (Postdoc) and Vasundhra (PhD) was simple:

Knowing what you know about AI and its evolution, do you have a message for mental healthcare providers?

What we have for you today is a message that is the culmination of years of research, professional experience, deep reflection, and a vision for a safer future.

Dear Clinician,

AI is being pitched as the next big thing in therapy and mental health care. But before you accept any claim, pause and ask the most important question:

What, exactly, is being automated?

You don’t need to decode every algorithm or read every line of code to evaluate AI-powered tools critically.

What you do need is a socio-technical lens — an understanding of how data, design choices, and cultural assumptions shape the technology being sold to you and your patients.

As a caregiver in an increasingly algorithmic world, here’s how to develop a socio-technical lens:

  • Data practices matter. How do the companies offering AI-powered mental health support collect and process information? Whose voices are represented, and whose are excluded?
  • Modeling choices are not neutral. AI tools reflect the social, cultural, and linguistic biases of their designers as much as the data they are trained on.
    What made up the training data for this product?
    Who helped shape it?
  • Privacy and security are just the beginning. Algorithmic risks go deeper, intertwining with culture and context in ways that technical safeguards alone cannot fix.
    Who is this product being marketed and sold to?
    Who is using it? What is the intended use, and how does that contrast with the way it is realistically being used?

Ask sharper questions: not only whether these tools “replace therapists,” but what they realistically achieve, for whom, and at what cost.

Who do AI systems serve well? Who might they leave behind?

Your patients are already experimenting with AI tools, whether you endorse them or not. The more you understand what these systems do — and how they do it — the better you can contextualize, guide, and protect those in your care.

This awareness stretches from surface-level features like chatbot 'personas' and scripted empathy, to deeper systemic issues like biased outputs, inappropriate responses, or dangerously persuasive and inaccurate advice.

Finally, acknowledge that biases in therapy can get translated into biases in AI models.

Algorithmic bias isn’t magic; it grows from the same social biases that affect humans, compounded by design decisions and modeling frameworks.

In short: stay curious, stay critical, and stay informed. AI may change how care is delivered, but it’s your expertise — not an algorithm — that keeps mental health services in any form humane, contextual, and safe.

At your service,
Researchers Aseem and Vasundhra

Aseem Srivastava investigates how large language models can be engineered not just for accuracy, but also for cultural and psychological sensitivity in real-world counseling interactions. He is currently a postdoc at MBZUAI in Abu Dhabi and completed his PhD at IIT-Delhi, India.

Vasundhra Dahiya works in Critical AI studies and algorithmic accountability. Informed by a socio-technical lens, her research focuses on understanding how cultural values, AI design, and AI policy shape each other. She is a doctoral researcher at IIT-Jodhpur, India.


Think of a person who would be interested in AI, therapy, and the future of mental health. Would they like to read this piece?

This newsletter is free, and by subscribing, you tell us that you are interested and want to know more!

📬 Support us by subscribing here.

💬 Connect with me, Harshali on LinkedIn.

See you soon,
Harshali
Founder, TinT

