#17 | TinT Labs | Guiding Principles for MH-AI Founders & Builders



Hello dear reader,

After many, many boxes, bags, and a city change, we’re so back!

In case you missed it, I’m now living in Madison (Wisconsin), the charming isthmus city and capital of the dairy state.
Translation: I fully intend to eat all the ice cream there is.

For now, I’m sipping warm honey water, thrilled to be back at writing this newsletter!

Today we continue our TinT Labs five-part special series, co-written with two brilliant researchers who study AI and mental health.

And now, 'tis time for part 3.

When we brainstormed this piece, I asked Aseem (Postdoc) and Vasundhra (PhD):

“Given what you know about AI and its evolution, what would you tell those at the helm of innovation?”

What follows is their answer in the form of a letter, distilled from years of research, professional experience, and reflection. It’s both a call to action and a vision for building a safer future.

Dear founders and builders of mental health AI,

We urge you to build collaboratively, market honestly, and engage critically.

  1. Define what you’re automating — and why. If the answer is unclear or fuzzy, pause and rethink the value you’re creating.
  2. Avoid the “move fast and break things” mindset. Ethical and social harms aren’t bugs to patch later. They’re preventable when you slow down and design with care.
  3. Be extremely critical of trends. Don’t rush to anthropomorphize your AI. Trends fade, but their impact on mental health is lasting.
  4. Build with an interdisciplinary team. Better data, models, design, business — and most importantly, better health outcomes — are worth the extra time.
  5. Don’t confuse “general-purpose” with “good enough”. Quick fixes are dangerous. Cultural adaptation, psycholinguistic awareness, and collaboration with clinicians are non-negotiable.
  6. Remember, you’re engineering human interactions, not just code. Culture, language, and context shape your model, and your model will shape them back.
  7. Involve users early. Participatory design and data justice are not afterthoughts. They’re foundations for safe, contextual systems.
  8. Be honest about your model’s limits. Accuracy alone isn’t enough; trust and positive health impact are the real indicators of success.
  9. Read widely across disciplines. Step beyond the tech circle. Humanities, journalism, and law will show you algorithmic harms you might miss.
  10. And finally, leave the field better than you found it. Open-source your datasets, share benchmarks, publish annotation guidelines. Technology best serves the public interest when its building blocks are open for others to improve.

These guidelines are a reminder that the future of mental health tech is ours to shape, together.

You’re already part of that change – building the vocabulary, curiosity, and confidence to influence the tools of tomorrow.

I’d love for this message to travel far and wide!

💬 Copy and post the above image and tag me or this newsletter

📥 Download a print ready version of it for your desk or clinic wall

🔗 Share it with a founder, builder, or technologist in your circle

Have a lovely week ahead,

Harshali
Founder, TinT


