#18 | TinT Labs | Series Finale: Researchers on Building Responsible MH-AI




Hello dear reader,

It’s been a home-chores kind of Sunday for me. Laundry done, groceries stocked, bills paid, lunches packed.

I usually write TinT early in the morning, but tonight, just two hours before the day ends, here I am: lamp on, tea warm, excited as ever.

Because honestly, writing TinT is still my favourite part of Sunday.

Today, we wrap up our TinT Labs series, co-written with the brilliant researchers Vasundhra Dahiya and Aseem Srivastava.

Before we close, let’s revisit the biggest takeaways of the past few editions.

Bridging Fields, Stakeholders, and Promises

In #13 | TinT Labs | PhDs Apply a Socio-Technical Lens to AI in Therapy we saw that AI and mental health isn’t just about algorithms; it’s about creating care that is accessible, safe, and meaningful.

Achieving this requires a dialogue between care practitioners, designers and developers, and health and tech policy makers.

A socio-technical perspective is key. Research in this space lives in the gaps between disciplines, between stakeholders, and between what’s promised and what’s actually possible.

From a WhatsApp triage bot to sophisticated AI companions, digital interventions require thoughtful collaboration, not just code. Expertise from academia, clinical practice, social sciences, design, startups, NGOs, and regulators must come together to ensure impact is real and responsible.

Therapists Are Essential in AI Research

Aseem’s research demonstrates how psycho- and socio-technical rigour can shape AI for mental health.

His studies on peer engagement in online communities (PLOS ONE 2025), culturally attuned LLMs for counseling summaries (EMNLP 2024), and knowledge-guided dialogue generation (WWW 2023) show that lived context, empathy, and cultural nuance are not optional—they’re foundational.

Crucially, none of this is possible without collaboration with mental health professionals.

Technology alone won’t create trustworthy care. Bridging gaps, translating between disciplines, and designing systems on people’s terms is what makes AI meaningful in this space.

Therapists Are Essential in AI Research.

This is exactly what we explore in #14 | TinT Labs | Dear Clinicians: A Letter from AI PhDs.

We Say It Again: Inter-disciplinary Is The Way To Go

AI for public good must reflect the diversity of the public.

This requires cross-disciplinary expertise: psychotherapists, psychiatrists, developers, designers, lawyers, policy makers—and yes, researchers who can translate between these worlds.

Collaboration allows us to add nuance, incorporate cultural insights, and build solutions that truly serve the people they promise to help.

In #17 | TinT Labs | Guiding Principles for MH-AI Founders & Builders we address the founders and builders at the helm of innovation, urging them to slow down, resist shortcuts, and build interdisciplinary teams.

PhD researchers like Vasundhra and Aseem operationalise this lens, mapping sociocultural experiences to algorithmic models and creating culturally sensitive, clinically informed AI tools.

PhD researchers are uniquely positioned to understand this diverse public and to incorporate cross-disciplinary perspectives into mental health and AI.

From psychotherapists and psychiatrists, to developers and designers, to lawyers and policy makers: the community needs to talk to each other. This cross-collaboration is not optional today; it is essential.

This newsletter is itself such a collaboration: computer scientists, a psychologist, and a designer, all working in the hope of making AI more useful, accessible, and accountable.

We say it again: Inter-disciplinary is the way to go.

One Big Takeaway

If you remember one thing from this series, let it be this:

AI in mental health only works when it is human-centered, context-aware, and collaboratively built.

Meet The Researchers

Vasundhra Dahiya

Vasundhra’s PhD work is born out of a storytelling workshop called Parables of AI by Data & Society.

In her doctoral project, Vasundhra has interviewed makers of AI therapy chatbots and platforms, and currently she is conducting a user study on how users perceive and navigate mental and emotional support. You can find more about the study here.

She works extensively in bridging interdisciplinary boundaries and creating critical AI/Data/algorithm literacies for everyone.

With Lavanya Dahiya and Dr Dibyadyuti Roy, she co-founded CLAIM [Critical Lens on AI in/from the Majority World], a reading and advocacy group that engages in criticality for/in-all-things AI. Write to her at dahiya.2@iitj.ac.in to join them.

Aseem Srivastava

In his work, Aseem continues to explore how AI Companions can be made culturally adaptive, psycho-linguistically informed, and safe for mental health contexts.

If you are a researcher, practitioner, or technologist interested in co-developing context-aware evaluation frameworks (either in startups or in academic research), longitudinal impact studies, or open cultural adaptation resources, Aseem would love to connect with you.

You can find his publications, open tools, and ongoing projects at as3eem.github.io. He also welcomes interdisciplinary collaborators to join him in making AI for mental health more personal, accountable, and accessible.

That's all for today, friends.

In the forthcoming editions, I'll be taking a look at what prevents mental healthcare providers from participating in the AI conversation.

Meanwhile, connect with me on LinkedIn. I’m finally getting better at tackling my posting block and showing up more regularly (and authentically) there!

See you next Sunday,
Harshali
Founder, TinT

