#3 | In Short: Foundation Models

Welcome to In Short, a midweek dip into the machine learning lexicon. Every midweek (it's still Thursday for me!), I'll unpack a single technical term in just a few paragraphs, no jargon jungle. Then, come weekend, we'll stretch the idea out: where it shows up in the wild, what it means for your work, and why it might matter more than it seems.

First up is...

What are Foundation Models?

Think of a Foundation Model like a well-trained intern who's read everything: every psychology textbook, every case file (ethically, of course), and every journal article. They've gone above and beyond in their homework and even read all the venting and ranting from clinicians and clients on Reddit forums and social apps. They're not a specialist yet, but they've absorbed a huge amount of general information (some of it solid knowledge, some of it not) across many topics.

Foundation Models are large AI models (like GPT-4 or BERT) trained on massive, diverse datasets to learn general patterns in language, images, or other types of data.

Brains, Bytes, and Billions of Words

Foundation models are built using a type of machine learning called deep learning, where artificial neural networks (inspired loosely by the brain) learn patterns across billions of pieces of data.
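For the technically curious, here is a deliberately tiny sketch in Python (using the PyTorch library, with numbers made up for illustration) of what "learning patterns" means in practice: show a small network some example pairs, measure how wrong its guesses are, and nudge its internal weights until the guesses improve. Real foundation models run the same loop over billions of examples and billions of weights.

```python
# A toy illustration, not a real foundation model: a tiny neural network
# learning the rough pattern "output is about twice the input" from examples.
import torch
import torch.nn as nn

# Made-up "data": input-output pairs where the output is roughly 2x the input.
xs = torch.tensor([[1.0], [2.0], [3.0], [4.0]])
ys = torch.tensor([[2.1], [3.9], [6.2], [7.8]])

# A small network: the same building blocks as the big models, scaled down.
model = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.MSELoss()

# "Training": guess, measure the error, nudge the weights, repeat.
for _ in range(500):
    optimizer.zero_grad()
    loss = loss_fn(model(xs), ys)
    loss.backward()
    optimizer.step()

# After training, the network's guess for an input it saw (3.0) lands near 6.2.
print(model(torch.tensor([[3.0]])).item())
```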

They're trained using massive amounts of computing power—often for weeks or months—by tech companies with the resources to do so. Some of the most well-known foundation models include GPT-4 (OpenAI), Gemini (Google), Claude (Anthropic), LLaMA (Meta), and Mistral (Mistral AI).

These models aren't just used in mental health; they're powering a wide variety of tools in law, medicine, customer service, education, and finance.

Most of these models are trained on publicly available text (books, websites, code) and then adapted through fine-tuning or prompt engineering to serve specific industries—including tools you're now seeing in therapy work.
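If you're curious what "prompt engineering" looks like in practice, here is a minimal, purely illustrative sketch in Python. It assumes the official OpenAI client library and an API key; the model name, the instructions, and the two-line transcript are all made up for the example, not a recommendation of any particular tool or clinical workflow.

```python
# A minimal sketch of prompt engineering: steering a general-purpose model
# toward a clinical task with instructions alone, with no retraining involved.
# Assumes the official OpenAI Python client and an OPENAI_API_KEY in the
# environment; the model name and prompts below are illustrative only.
from openai import OpenAI

client = OpenAI()

transcript = (
    "Therapist: How has your sleep been since we last met?\n"
    "Client: Honestly, a bit better, maybe five or six hours a night..."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative choice of foundation model
    messages=[
        {
            "role": "system",
            "content": (
                "You draft concise, neutral session-note summaries "
                "for a licensed therapist to review and edit."
            ),
        },
        {"role": "user", "content": f"Summarise this excerpt:\n\n{transcript}"},
    ],
)

print(response.choices[0].message.content)
```

The point isn't the code itself; it's the shape of the adaptation. The underlying model stays general, and the "specialising" lives in the instructions (or, with fine-tuning, in extra task-specific training data).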

Many foundation models are multimodal—meaning they can handle not just text, but also images, audio, or video, and connect meaning across them. That’s why some tools can now summarise a therapy session transcript, analyse tone of voice, or even describe a client’s facial expression, all within the same system.

What This Means for Your Practice

These models form the “foundation” for many of the AI tools you may be encountering, like a chatbot supporting clients between sessions, a tool summarising session notes, or an app analysing tone of voice in therapy.

The core model wasn’t trained just for your context—it was trained for everything. That’s both a strength (general understanding, flexible) and a limitation (it needs refining to be clinically useful).

In Short (Yes, Again): The Takeaway

Foundation Models are not the final answer; they are the starting point. A powerful generalist tool that gets turned into a specialist with the right data, context, and human oversight. If an app claims to use AI in therapy, it likely started with a foundation model under the hood.

The upcoming edition will dive deeper into Foundation Models and the role they play in the mental health space, along with their applications and challenges.

Since Tech+Mental Health is Your Jam...

I thought I'd share this event with you if you didn't know about it already! Tanmoy Goswami of Sanity by Tanmoy is single-handedly organising PostxCode, an interdisciplinary event to be held on Sunday, July 20th, 2025.

I had the honour of recording a little piece for the event (being in Seattle has its advantages; the cruel PST-IST time difference is not one of them).

If you're not from India or aren't specifically up to date on the latest in the Indian mental health tech scene, that's fine too.

Come join in to learn how the landscape is changing in South Asia; you might find some patterns relevant to your own region.


Thanks for reading In Short! If you found this helpful, share it with a colleague who’s curious about AI but allergic to jargon.

💬 Connect with me, Harshali, on LinkedIn
📬 Subscribe to the newsletter here if you're reading this as a free preview.
🔁 And pass it along if it sparked something; it helps more than you know.

See you this weekend for the long(er) read!
Harshali
Founder, TinT

