#14 | TinT Labs | Dear Clinicians: A Letter from AI PhDs



Dear Clinician: A Letter from AI PhDs

Hello reader,

It's a sunny Sunday in my corner of the world!

Today we continue our TinT Labs five-part special series, co-written by two illustrious PhD researchers who study AI and mental health.

And right now, it's time for part 2.


When we were brainstorming for this piece, my nudge to Aseem (Postdoc) and Vasundhra (PhD) was simple:

Knowing what you know about AI and its evolution, do you have a message for mental healthcare providers?

What we have for you today is a message that is the culmination of years of research, professional experience, deep reflection, and a vision for a safer future.

Dear Clinician,

AI is being pitched as the next big thing in therapy and mental health care. But before you accept any claim, pause and ask the most important question:

What, exactly, is being automated?

You don’t need to decode every algorithm or read every line of code to evaluate AI-powered tools critically.

What you do need is a socio-technical lens — an understanding of how data, design choices, and cultural assumptions shape the technology being sold to you and your patients.

Here’s how you, as a caregiver in an increasingly algorithmic world, can develop a socio-technical lens:

  • Data practices matter. How do companies offering AI-powered mental health support collect and process information? Whose voices are represented, and whose are excluded?
  • Modeling choices are not neutral. AI tools reflect the social, cultural, and linguistic biases of their designers as much as those of the data they are trained on.
    What comprised the training data for this product?
    Who helped shape it?
  • Privacy and security are just the beginning. Algorithmic risks go deeper, intertwining with culture and context in ways that technical safeguards alone cannot fix.
    Who is this product being marketed and sold to?
    Who is using it? What is the intended use, and how does that contrast with the way the product is realistically being used?

Ask sharper questions, not only about whether these tools “replace therapists,” but about what they realistically achieve, for whom, and at what cost.

Who do AI systems serve well? Who might they leave behind?

Your patients are already experimenting with AI tools, whether you endorse them or not. The more you understand what these systems do — and how they do it — the better you can contextualize, guide, and protect those in your care.

This awareness stretches from surface-level features like chatbot 'personas' and scripted empathy, to deeper systemic issues like biased outputs, inappropriate responses, or dangerously persuasive and inaccurate advice.

Finally, acknowledge that biases in therapy can get translated into biases in AI models.

Algorithmic bias isn’t magic; it grows from the same social biases that affect humans, compounded by design decisions and modeling frameworks.

In short: stay curious, stay critical, and stay informed. AI may change how care is delivered, but it’s your expertise — not an algorithm — that keeps mental health services in any form humane, contextual, and safe.

At your service,
Researchers Aseem and Vasundhra

Aseem Srivastava investigates how large language models can be engineered not just for accuracy, but also for cultural and psychological sensitivity in real-world counseling interactions. He’s currently a postdoc at MBZUAI in Abu Dhabi. He completed his PhD at IIT-Delhi, India.

Vasundhra Dahiya works in Critical AI studies and algorithmic accountability. Informed by a socio-technical lens, her research focuses on understanding how cultural values, AI design, and AI policy shape each other. She is a doctoral researcher at IIT-Jodhpur, India.


Think of a person who would be interested in AI, therapy, and the future of mental health. Would they like to read this piece?

This newsletter is free, and by subscribing you tell us that you’re interested and want to know more!

📬 Support us by subscribing here.

💬 Connect with me, Harshali on LinkedIn.

See you soon,
Harshali
Founder, TinT

