#13 | TinT Labs | PhDs Apply a Socio-Technical Lens to AI in Therapy



The first TinT Labs collaboration brings together two illustrious PhD researchers who study AI and mental health.

Aseem Srivastava investigates how large language models can be engineered not just for accuracy, but also for cultural and psychological sensitivity in real-world counseling interactions. He is currently a postdoc at MBZUAI in Abu Dhabi and completed his PhD at IIT Delhi.

Vasundhra Dahiya works in Critical AI studies and algorithmic accountability. Informed by a socio-technical lens, her research focuses on understanding how cultural values, AI design, and AI policy shape each other. She is a doctoral researcher at IIT Jodhpur.

This is the first in a five-part TinT Labs x PhD Researchers series. Let's dive in!

What Does PhD Research Uncover About AI and Mental Health?

When people hear “AI for mental health,” they often imagine computer scientists shaking hands with psychiatrists to build a perfect digital therapist. The truth? The ecosystem is far bigger, far messier, and far more human.

According to researcher Vasundhra, AI products offering mental health assistance aren’t just technical artifacts. They sit inside an intricate web of stakeholders: developers, clinicians, social workers, behavioural scientists, policymakers, journalists, medical ethicists, and yes — even students experimenting with open-source data.

This is public interest technology, shaped by the social, cultural, medical, and legal contexts that surround it.

Culture Isn’t an Add-On, It’s the Foundation

Aseem’s research makes a crucial point: cultural context isn’t optional. People express distress, seek help, and engage with technology differently depending on their backgrounds. Systems that ignore this risk alienating the very people they aim to serve.

Through psycholinguistic modeling, his work shows how tone, discourse, and language cues can guide AI toward safer, more empathetic responses. But that only happens if systems are co-designed with clinicians, community workers, and cultural experts, not built in isolation.

Both researchers agree: this is why interdisciplinary collaboration and long-term research are essential. We need open, shared benchmarks for cultural adaptation, not one-size-fits-all AI products rushing to market with claims of providing mental health services.

Binary Attitude Toward AI: Doom or Hype?

Today’s conversation about AI swings between extremes:

“AI will replace human therapists everywhere.”

“AI is just a fancy autocomplete, a stochastic parrot.”

Both miss the point.

LLMs don’t “understand” you — they generate the most statistically likely response based on their training data. That’s not intelligence, and it’s certainly not empathy.
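To make “statistically likely” concrete, here is a deliberately toy sketch in Python. It is nothing like a production LLM (no neural network, no billions of parameters), and the mini-corpus and next_word helper are invented purely for illustration, but it shows the same underlying move: continue the text with whatever followed it most often in the training data.

```python
# A toy illustration, not a real LLM: given a word, pick the
# statistically most likely next word from counts in "training data".
from collections import Counter

# A hypothetical mini-corpus standing in for training data.
corpus = "i feel sad today . i feel tired today . i feel sad again".split()

def next_word(prompt_word: str) -> str:
    # Count which words followed `prompt_word` anywhere in the corpus.
    followers = Counter(
        corpus[i + 1]
        for i in range(len(corpus) - 1)
        if corpus[i] == prompt_word
    )
    # Return the most frequent follower: likelihood, not understanding.
    return followers.most_common(1)[0][0]

print(next_word("feel"))  # -> "sad", because "sad" followed "feel" most often
```

A real model replaces these raw counts with a neural network trained on vastly more text, which makes it far more fluent; but the output is still a likely continuation, not comprehension or empathy.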

This distinction is especially critical in mental health, where personal traits, cultural norms, and psycholinguistic cues carry specific meaning.

Calling an AI a “therapist” or “friend” obscures what it really is: a probabilistic text generator, not a companion with lived experience or intent.

Will Better Data Save Us All?

Contrary to popular belief, more data doesn’t automatically make models better. Researchers consistently emphasise that context makes data and algorithms truly useful.

Context isn’t just technical — it’s socio-cultural, linguistic, and disciplinary. For example, a model designed for therapy must account for cultural norms, language nuances, and clinical knowledge to be meaningful and safe.

This perspective challenges the assumption that a one-size-fits-all, general-purpose model can meet public needs. It also challenges the large majority of products on the market that are simply mental health wrappers around general LLMs.

Domain-specific training, cultural adaptation, and co-design with clinicians are essential.

Even seemingly harmless AI “companions” that only converse can cause real harm through misinterpretation, dependency, or unaddressed triggers.

AI for mental health is not just a technical achievement.

It’s a socio-technical system, one that must integrate cultural knowledge, clinical expertise, and ongoing human oversight from day one.

Who Builds AI, Who Gets Left Out?

From data collection to annotation and model design to deployment, power dynamics shape everything.

Who defines the problem?

Who benefits from the solution?

Who is excluded from its design and impact?

AI systems don’t arise from nowhere. They rely on human labor at every stage, from the therapists and therapeutic workers who contribute their knowledge to the annotators and engineers who build the systems.

Keeping people at the center means recognising these structures of power and being intentional about who gets a voice at each stage.

What Answers Should Therapists Be Seeking?

Instead of obsessing over whether “AI will replace me,” clinicians might ask:

  • What process is this system automating?
  • What data was used to build it, and how was it annotated?
  • Whose cultural, social, or clinical knowledge informed its design?
  • What does it claim to solve, and what can it realistically do (and not do)?

These questions move the conversation away from hype and fear toward literacy, transparency, and accountability.

This concludes the first in a series of five pieces co-written by researchers Aseem Srivastava and Vasundhra Dahiya.

In the next edition, Aseem and Vasundhra write a letter to mental healthcare clinicians.

What do AI researchers want to say to clinicians? Be sure to read next Sunday!

Know someone who's interested in interdisciplinary collaborations between clinicians and technologists?

Pass this newsletter along to them!


📬 This newsletter is free, but subscribing tells us that you’re interested in what we have to say!
Support us by subscribing here.

💬 Connect with me, Harshali, on LinkedIn.

See you soon,
Harshali
Founder, TinT

