#4 | Big Brains, Small Samples: Foundation Models



Hi there, Reader

It’s a sunny morning here in Seattle as I settle down to write this one.

Foundation models fascinate me. Machines that seem to think when you speak to them? Wild.

This fascination is still fairly new. Early last year, while I was knee-deep in building a clinical supervision tool, I came across a headline that stopped me in my tracks:

SlingshotAI was the new darling of Mental HealthTech. Sleek, ambitious, audacious. Every founder wanted to be them. Every VC wanted in. A few of my friends even applied to work there.

It wasn’t just that they were building a foundation model. It was who they were building it for: a niche, complex, emotionally charged domain like mental health. The idea itself felt bold, maybe even a bit reckless. Would it become an all-knowing therapist trained on every disorder in the DSM?

If you need a quick refresher on what foundation models are, how they’re trained, and where they show up in Mental HealthTech, check out the last issue of TinT (a 3-minute read).

So What’s the Big Fuss Around FMs in Psychology?

At first glance, the argument feels simple: Don’t fund AI tools that try to replace therapists.

But dig deeper, and you’ll find layers thick with nuance, ethics, and urgency.

The Case For

Companies like SlingshotAI argue that specialised models are far better suited for mental health than general-purpose tools like ChatGPT.

Their co-founder makes a blunt case: General models keep offering risky advice to vulnerable users. One famously told a distressed user to un-alive themself. Another encouraged heroin use.

The uncomfortable truth? People are already turning to AI for emotional and therapeutic support. Whether it’s ethical or not. Whether we like it or not.

If that’s the reality, the argument goes: Let’s build safer, smarter, more clinically aware models instead of leaving it to chance.

The Counter

Look closer at that newspaper clipping. It says the company “worked with over 40 clinicians for months…”

Wait — forty? Only forty?

If you’re building a “know-it-all” tool meant to speak the language of an entire profession, you’d expect a much larger group of clinicians shaping it, not a few dozen.

Foundation models need massive amounts of data to generate even remotely useful responses, let alone ones that are contextually sensitive, ethically sound, and clinically appropriate. A single clinician takes years — often decades — to master their craft. How does a machine learn the same from a handful of contributors? And that’s without even touching on the murky waters of data sourcing, representation, consent, and what it means to train an AI on something as private as therapy conversations.

Then there’s the money.

OpenAI’s GPT-3 reportedly cost $5–12 million to train, most of it spent on development and compute. So when a private company pours that kind of cash into a foundation model for a niche field like mental health…

You can bet they’ll want returns. Big ones.

So the question isn’t just “Is this clinically safe?”

It’s also: Who gets to build these models? Who benefits from them? And at what cost to the profession they aim to serve?

So Are My Notes At Risk Of Becoming Fodder for Machines?

Short answer: It depends.

Most industry-specific models need industry-specific data to train on. A legal AI model might ingest hundreds of thousands of contracts. A radiology model might need a million chest X-rays to learn what TB looks like. And for a mental health model to be truly useful? It needs real-world therapy data.

That kind of data can’t just be scraped from the internet. It has to come from real practitioners, real cases, and that means it should come with informed consent.

So if you’re using a specific Mental HealthTech software in your practice, go read their Terms of Use. Carefully.

What exactly have you agreed to share?

I break this down in more detail in this earlier TinT issue.

On the other hand, if you’re using a general LLM like ChatGPT to rewrite or clarify notes for personal use, it’s less clear-cut.

You’re feeding data into a model trained for writing, not therapy. But if you’re using the free version, your data might still be used to improve the model unless you’ve opted out.

If You’ve Read This Far, Here’s a Little Treat


A recent paper from researchers at the Beijing Institute of Technology, published in the June 2025 edition of Medicine Plus, offers a crisp and illuminating look into how foundation models are being built for digital mental health.

It’s short. It’s sharp. It’s worth a read on your coffee break.

P.S. Our Team Is Growing! 💌

We’ve got a brilliant new brain on the team!

Joining TinT as our Editorial Researcher is Aditi, a designer with a background in social-emotional learning. She’s built her own SEL program (KhilKhil Labs) and has worked at the intersection of design, emotional development, and education—for both children and adults.

As we grow, I (Harshali, founder of TinT) have made it a personal mission to keep this space sponsorship-free and ad-free.

Not because featuring ads is bad, but because this is meant to be an interdisciplinary space for learning and collaboration. I believe real work happens when no one is being sold to, and everyone shows up of their own accord, representing the best of their profession and curiosity.

If you’ve found value in TinT, I’d be so grateful if you’d consider a one-time donation. Your support helps fund the research, writing, and heart that go into every edition.

💛 Support TinT here — every bit counts!


Thanks for reading TinT!

💬 Connect with me, Harshali, on LinkedIn
📬 Subscribe to the newsletter here if you’re reading this as a free preview,
🔁 And pass it along if it sparked something; it helps more than you know.

See you soon,
Harshali
Founder, TinT
