Hello dear reader,
It’s been a home-chores kind of Sunday for me. Laundry done, groceries stocked, bills paid, lunches packed.
I usually write TinT early in the morning, but tonight, just two hours before the day ends, here I am: lamp on, tea warm, excited as ever.
Because honestly, writing TinT is still my favourite part of Sunday.
Today, we wrap up our TinT Labs series, co-written with the brilliant researchers Vasundhra Dahiya and Aseem Srivastava.
Before we close, let’s revisit the biggest takeaways of the past few editions.
Bridging Fields, Stakeholders, and Promises
In #13 | TinT Labs | PhDs Apply a Socio-Technical Lens to AI in Therapy we saw that AI and mental health isn’t just about algorithms, it’s about creating care that is accessible, safe, and meaningful.
Achieving this requires a dialogue between care practitioners, designers and developers, and health and tech policy makers.
A socio-technical perspective is key. Research in this space lives in the gaps between disciplines, between stakeholders, and between what’s promised and what’s actually possible.
From a WhatsApp triage bot to sophisticated AI companions, digital interventions require thoughtful collaboration, not just code. Expertise from academia, clinical practice, social sciences, design, startups, NGOs, and regulators must come together to ensure impact is real and responsible.
Therapists Are Essential in AI Research
Aseem’s research demonstrates how psycho- and socio-technical rigour can shape AI for mental health.
His studies on peer engagement in online communities (PLOS ONE 2025), culturally attuned LLMs for counseling summaries (EMNLP 2024), and knowledge-guided dialogue generation (WWW 2023) show that lived context, empathy, and cultural nuance are not optional—they’re foundational.
Crucially, none of this is possible without collaboration with mental health professionals.
Technology alone won’t create trustworthy care. Bridging gaps, translating between disciplines, and designing systems on people’s terms is what makes AI meaningful in this space.
Therapists Are Essential in AI Research.
This is exactly what we explore in #14 | TinT Labs | Dear Clinicians: A Letter from AI PhDs.
We Say It Again: Inter-disciplinary Is The Way To Go
AI for public good must reflect the diversity of the public.
This requires cross-disciplinary expertise: psychotherapists, psychiatrists, developers, designers, lawyers, policy makers—and yes, researchers who can translate between these worlds.
Collaboration allows us to add nuance, incorporate cultural insights, and build solutions that truly serve the people they promise to help.
In #17 | TinT Labs | Guiding Principles for MH-AI Founders & Builders we address founders and builders at the helm of innovation, urging them to slow down, resist shortcuts, and build interdisciplinarily.
PhD researchers like Vasundhra and Aseem operationalise this lens, mapping sociocultural experiences to algorithmic models and creating culturally sensitive, clinically informed AI tools.
PhD researchers are uniquely positioned to understand these diverse publics and to incorporate cross-disciplinary perspectives into mental health and AI.
From psychotherapists and psychiatrists, to developers and designers, to lawyers and policy makers: the community needs to talk to each other. This cross-collaboration is not optional today; it is essential.
This newsletter itself is an example: computer scientists, a psychologist, and a designer, all working in the hope of making AI more useful, accessible, and accountable.
We say it again: Inter-disciplinary is the way to go.
One Big Takeaway
If you remember one thing from this series, let it be this:
AI in mental health only works when it is human-centered, context-aware, and collaboratively built.
Meet The Researchers
Vasundhra Dahiya
Vasundhra’s PhD work is born out of a storytelling workshop called Parables of AI by Data & Society.
In her doctoral project, Vasundhra has interviewed makers of AI therapy chatbots and platforms, and currently she is conducting a user study on how users perceive and navigate mental and emotional support. You can find more about the study here.
She works extensively in bridging interdisciplinary boundaries and creating critical AI/Data/algorithm literacies for everyone.
With Lavanya Dahiya and Dr Dibyadyuti Roy, she co-founded CLAIM [Critical Lens on AI in/from the Majority World], a reading and advocacy group that engages in criticality for/in-all-things AI. Write to her at [dahiya.2@iitj.ac.in] to join them.
Aseem Srivastava
In his work, Aseem continues to explore how AI Companions can be made culturally adaptive, psycho-linguistically informed, and safe for mental health contexts.
If you are a researcher, practitioner, or technologist interested in co-developing context-aware evaluation frameworks (either in startups or in academic research), longitudinal impact studies, or open cultural adaptation resources, Aseem would love to connect with you.
You can find his publications, open tools, and ongoing projects at as3eem.github.io. He also welcomes interdisciplinary collaborators to join him in making AI for mental health more personal, accountable, and accessible.
That's all for today, friends.
In the forthcoming editions, I'll be taking a look at what prevents mental healthcare providers from participating in the AI conversation.
Meanwhile, connect with me on LinkedIn. I’m finally getting better at tackling my posting block and showing up more regularly (and authentically) there!
See you next Sunday,
Harshali
Founder, TinT