#17 | TinT Labs | Guiding Principles for MH-AI Founders & Builders



Hello dear reader,

After many, many boxes, bags, and a city change, we’re so back!

In case you missed it, I’m now living in Madison (Wisconsin), the charming isthmus city and capital of the dairy state.
Translation: I fully intend to eat all the ice cream there is.

For now, I’m sipping warm honey water, thrilled to be back at writing this newsletter!

Today we continue our TinT Labs five-part special series, co-written with two brilliant researchers who study AI and mental health.

And now, 'tis time for part 3.

When we brainstormed this piece, I asked Aseem (Postdoc) and Vasundhra (PhD):

“Given what you know about AI and its evolution, what would you tell those at the helm of innovation?”

What follows is their answer in the form of a letter, distilled from years of research, professional experience, and reflection. It’s both a call to action and a vision for building a safer future.

Dear founders and builders
of mental health AI,

We urge you to build collaboratively, market honestly, and engage critically.

  1. Define what you’re automating — and why. If the answer is unclear or fuzzy, pause and rethink the value you’re creating.
  2. Avoid the “move fast and break things” mindset. Ethical and social harms aren’t bugs to patch later. They’re preventable when you slow down and design with care.
  3. Be extremely critical of trends. Don’t rush to anthropomorphize your AI. Trends fade, but their impact on mental health is lasting.
  4. Build with an interdisciplinary team. Better data, models, design, business — and most importantly, better health outcomes — are worth the extra time.
  5. Don’t confuse “general-purpose” with “good enough”. Quick fixes are dangerous. Cultural adaptation, psycholinguistic awareness, and collaboration with clinicians are non-negotiable.
  6. Remember, you’re engineering human interactions, not just code. Culture, language, and context shape your model, and your model will shape them back.
  7. Involve users early. Participatory design and data justice are not afterthoughts. They’re foundations for safe, contextual systems.
  8. Be honest about your model’s limits. Accuracy alone isn’t enough; trust and positive health impact are the real indicators of success.
  9. Read widely across disciplines. Step beyond the tech circle. Humanities, journalism, and law will show you algorithmic harms you might miss.
  10. And finally, leave the field better than you found it. Open-source your datasets, share benchmarks, publish annotation guidelines. Technology best serves the public interest when its building blocks are open for others to improve.

These guidelines are a reminder that the future of mental health tech is ours to shape, together.

You’re already part of that change – building the vocabulary, curiosity, and confidence to influence the tools of tomorrow.

I’d love for this message to travel far and wide!

💬 Copy and post the above image and tag me or this newsletter

📥 Download a print-ready version of it for your desk or clinic wall

🔗 Share it with a founder, builder, or technologist in your circle

Have a lovely week ahead,

Harshali
Founder, TinT

