"Your newsletter felt like someone holding my hand through an unfamiliar field. It had none of the jargon-heavy complexity I brace for in most tech newsletters—just clarity, warmth, and flow."
It’s a sunny morning here in Seattle as I settle down to write this one.
Foundation models fascinate me. Machines that seem to think when you speak to them? Wild.
This fascination is still fairly new. Early last year, while I was knee-deep in building a clinical supervision tool, I came across a headline that stopped me in my tracks:
SlingshotAI was the new darling of Mental HealthTech. Sleek, ambitious, audacious. Every founder wanted to be them. Every VC wanted in. A few of my friends even applied to work there.
It wasn’t just that they were building a foundation model. It was who they were building it for: a niche, complex, emotionally charged domain like mental health. The idea itself felt bold, maybe even a bit reckless. Would it become an all-knowing therapist trained on every disorder in the DSM?
If you need a quick refresher on what foundation models are, how they’re trained, and where they show up in Mental HealthTech, check out the last issue of TinT (a 3-minute read).
So What’s the Big Fuss Around FMs in Psychology?
At first glance, the argument feels simple: Don’t fund AI tools that try to replace therapists.
But dig deeper, and you’ll find layers thick with nuance, ethics, and urgency.
The Case For
Companies like SlingshotAI argue that specialised models are far better suited for mental health than general-purpose tools like ChatGPT.
Their co-founder makes a blunt case: General models keep offering risky advice to vulnerable users. One famously told a distressed user to un-alive themself. Another encouraged heroin use.
The uncomfortable truth? People are already turning to AI for emotional and therapeutic support. Whether it’s ethical or not. Whether we like it or not.
If that’s the reality, the argument goes: Let’s build safer, smarter, more clinically aware models instead of leaving it to chance.
SlingshotAI's co-founder makes a case for the state of Illinois to consider AI in therapy
The Counter
Look closer at that newspaper clipping. It says the company “worked with over 40 clinicians for months…”
Wait — forty? Only forty?
If you’re building a “know-it-all” tool meant to speak the language of an entire profession, you’d expect a much larger group of clinicians shaping it, not a few dozen.
Foundation Models need massive amounts of data to generate even remotely useful responses, let alone ones that are contextually sensitive, ethically sound, and clinically appropriate. A single clinician takes years — often decades — to master their craft. How does a machine learn the same from a handful of contributors? And that’s without even touching on the murky waters of data sourcing, representation, consent, and what it means to train an AI on something as private as therapy conversations.
Then there’s the money.
OpenAI’s GPT-3 reportedly cost $5–12 million to train, most of it spent on development and compute. So when a private company pours that kind of cash into a foundation model for a niche field like mental health…
You can bet they’ll want returns. Big ones.
So the question isn’t just “Is this clinically safe?”
It’s also: Who gets to build these models? Who benefits from them? And at what cost to the profession they aim to serve?
So Are My Notes At Risk Of Becoming Fodder for Machines?
Short answer: It depends.
Most industry-specific models need industry-specific data to train on. A legal AI model might ingest hundreds of thousands of contracts. A radiology model might need a million chest X-rays to learn what TB looks like. And for a mental health model to be truly useful? It needs real-world therapy data.
That kind of data can’t just be scraped from the internet. It has to come from real practitioners, real cases, and that means it should come with informed consent.
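If you’re curious what “training on domain data” actually looks like, here’s a tiny, purely illustrative sketch in Python using the open-source Hugging Face transformers and datasets libraries. Everything in it is a placeholder I’ve invented for illustration: a small general model (distilgpt2) standing in for a foundation model, and two made-up lines standing in for the therapy transcripts a real system would need, at vastly greater scale and only with consent. This is not how SlingshotAI or any particular vendor builds their model.

```python
# A minimal, illustrative sketch of how a general-purpose model gets
# specialised on domain text. The model name and the toy "transcripts"
# are placeholders, not anything a real vendor uses.
from transformers import (AutoTokenizer, AutoModelForCausalLM, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from datasets import Dataset

# Stand-in for the real-world therapy data a domain model would need.
# In practice, this is the sensitive part: it has to come from real
# sessions, with informed consent.
toy_transcripts = [
    "Client: I've been feeling overwhelmed at work lately...",
    "Therapist: Can you tell me more about when that feeling started?",
]

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

dataset = Dataset.from_dict({"text": toy_transcripts}).map(
    tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()  # the "learning from domain data" step
```

The point of the sketch is the shape of the pipeline: whatever sits in that training dataset is what the model learns to imitate.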
So if you’re using a specific Mental HealthTech software in your practice, go read their Terms of Use. Carefully.
On the other hand, if you’re using a general LLM like ChatGPT to rewrite or clarify notes for personal use, it’s less clear-cut.
You’re feeding data into a model trained for writing, not therapy. But if you’re using the free version, your data might still be used to improve the model unless you’ve opted out.
If You’ve Read This Far, Here’s a Little Treat
A recent paper from researchers at the Beijing Institute of Technology, published in the June 2025 edition of Medicine Plus, offers a crisp and illuminating look into how foundation models are being built for digital mental health.
It’s short. It’s sharp. It’s worth zooming in on your coffee break.
PS. Our Team Is Growing! 💌
We’ve got a brilliant new brain on the team!
Joining TinT as our Editorial Researcher is Aditi, a designer with a background in social-emotional learning. She’s built her own SEL program (KhilKhil Labs) and has worked at the intersection of design, emotional development, and education—for both children and adults.
As we grow, I (Harshali, founder of TinT) have made it a personal mission to keep this space sponsorship-free and ad-free.
Not because featuring ads is bad, but because this is meant to be an interdisciplinary space for learning and collaboration. I believe real work happens when no one is being sold to and everyone shows up of their own accord, representing the best of their profession and curiosity.
If you’ve found value in TinT, I’d be so grateful if you’d consider a one-time donation. Your support helps fund the research, writing, and heart that goes into every edition.
💬 Connect with me, Harshali, on LinkedIn.
📬 Subscribe to the newsletter here if you’re reading this as a free preview.
🔁 And pass it along if it sparked something. It helps more than you know.
"Your newsletter felt like someone holding my hand through an unfamiliar field. It had none of the jargon-heavy complexity I brace for in most tech newsletters—just clarity, warmth, and flow."
5 min readWebsite How To Evaluate LLMs For Crisis Response Paid AI work opportunity for clinicians in this newsletter. Scroll to bottom highlight. Hello dear reader, We’ve hit the six-month mark of this newsletter. I’d promised myself I’d send it out every weekend, a promise I’ve now broken twice. Once while moving cities, and again last weekend on a trip to Hawaii accompanying my partner for his conference. Both times, I assumed I could keep my routine going on despite big life...
5 min readWebsite #21 | Clinical OpEd: What should therapists look for when evaluating AI tools? Hello dear reader, It’s Saturday evening. The sunset hues outside my windows are magnificent. My weekend writing rhythm has set in. Today’s piece is unlike any before. This is the first time Tint feature’s a therapists’ own writing. I met Daniel Fleshner a few months ago via LinkedIn. I feel I should send the LinkedIn team a thank you note for all the meaningful connections LI has sparked for me...
4 min readWebsite How a Clinician in India Is Using AI to Train Therapists Hello dear reader, It's an autumn Saturday morning. This is my first fall in the States, rather - my first fall ever. I'm a tropical girl living in a temperate world, and the changing season have been such a joy to witness. 🍂 While we're talking of joys, one of the joys of writing TinT is meeting clinicians who are quietly redefining what it means to be tech-informed. Today’s story is about one such clinician. Jai...