Hello dear reader,
As I sat down to write this piece, it struck me: in a rare coincidence, the weather at my home in Seattle—gloomy, rainy, murky—matches the weather in my other home, Mumbai. A small connection through the elements.
This week I’ve been thinking deeply about how to approach today’s topic: fine-tuning.
What is Fine-Tuning, and Why Now?
The task of fine-tuning became a prominent technique with the rise of foundation models in the late 2010s.
When it comes to building specialised foundation models, there was a need to go a step further and include specialists - clinical experts.
The job of clinical fine-tuning for specialised foundation models is accessible to perhaps only the 95th and above percentile of clinicians worldwide.
When Do You Fine-Tune?
Most players in mental health AI today aren’t building models from scratch. Instead, they start with powerful open foundation models like GPT-4 (OpenAI), Claude (Anthropic), or Gemini (Google), and then fine-tune them to better fit clinical needs.
Fine-tuning usually comes after the initial model training. It follows a step called Reinforcement Learning from Human Feedback (RLHF), where human reviewers guide the model to produce more helpful and safe responses.
Once the foundation is aligned, additional layers are added: safety mechanisms, empathy scaffolds, tone calibration, and sometimes even therapeutic techniques like active listening or Socratic questioning.
The innovation lies not in reinventing the core technology, but in shaping and adapting it to mental health contexts through structured prompts, safety filters, and targeted fine-tuning.
What Could a Clinician’s Role in Fine-Tuning Look Like?
As the overlap between mental health and machine learning deepens, we’re beginning to see more clarity around the roles clinicians can play in shaping these tools.
Here’s what that might look like:
1. Curating therapy-specific data
- Gathering large, anonymised therapy transcripts/audio/video from clinicians across modalities (CBT, ACT, humanistic, etc.)
- Cleaning and annotating the data (e.g., identifying moments of reflection, validation, or open-ended questions)
- Labelling helpful vs. harmful responses
2. Designing learning objectives
- Defining model goals like:
“Generate empathetic reflections”
“Identify cognitive distortions”
“Maintain therapeutic alliance”
- Working with engineers to create input-output training pairs
3. Guiding the re-training process
- Steering re-training using specialised domain-specific datasets
- Evaluating outputs across contexts like grief, trauma, relationship conflict, etc.
4. Running safety & bias tests
- Stress-testing models with emotionally intense prompts
- Flagging culturally insensitive or pathologizing responses
- Suggesting new examples to iterate further
5. Writing documentation & guardrails
- Providing clinical rationale for model behavior
- Drafting “refusal” responses for sensitive topics like suicidality or diagnosis
Who Gets to Influence ML Models?
This level of interdisciplinary collaboration between clinicians and ML engineers remains rare and elite. Today, it’s mostly accessible to those with advanced research credentials and comfort with statistical modeling.
I came across a JD for an open role at a leading mental health tech company. Notice the eligibility criteria:
As mental health tech expands through innovation and mainstream adoption, more clinicians will need to play a role in shaping what’s being built.
And yet, fine-tuning requires something already trainable.
Right now, that means general-purpose foundation models. In the future, it will mean specialised foundation models—trained specifically for behavioural health, diagnostics, or even therapeutic techniques.
When that data becomes available, teams will have to clean it, interpret it, and fine-tune it for meaningful use.
Those teams must include clinicians.
So the real question becomes:
What Skills Will Clinicians Need in the Future?
We may soon see job roles like Clinical RLHF Expert or Therapeutic Model Trainer.
To prepare for these, clinicians might need to:
- Grow comfortable with structured data
- Develop annotation and analysis skills
- Learn how ML workflows operate
- Practice evaluating models with clinical lenses
If you were to ask me, “How do I go about developing these skills?”, I’ll admit, I don’t have a one-size-fits-all answer right now.
But that’s exactly the mission of TinT!
We’re here to build technology-informed therapists who grow with the industry: intentionally, not reactively.
Help us sustain this effort
💛 If TinT's mission aligns with your vision for your career, consider supporting us.
We’re committed to keeping TinT independent (no ads, no sponsors!) because real learning deserves a space free from sales pitches. Your one-time contribution helps make that possible.
TinT is growing — and how!
In the spirit of bringing clinicians into the heart of tech conversations, I’m thrilled to share that we’re now a team of three!
Please join me in welcoming Vinamra Vasudeva as our Clinical Lead for Strategic Initiatives.
Vinamra is a psychotherapist, systems thinker, and mental health leader who has worked across public health, digital care, and clinical training. Her approach connects care, context, and scale—exactly the kind of thinking we want to anchor our future conversations at TinT.
We’re lucky to have her on board!
Thought of someone while reading this edition? Share it with them!
💬 Connect with me, Harshali on LinkedIn
📬 Subscribe to the newsletter here if you’re reading this as a free preview,
🔁 And pass it along if it sparked something, it helps more than you know.
Harshali
Founder, TinT