"Your newsletter felt like someone holding my hand through an unfamiliar field. It had none of the jargon-heavy complexity I brace for in most tech newsletters—just clarity, warmth, and flow."
Share
#26 | How To Pick Between Scale vs Depth In Mental Health Innovation
How To Pick Between Scale vs Depth In Mental Health Innovation
Hello friends,
I sit at my desk, a ginger lemon tea in tow as I once again bring the TinT newsletter to your inbox.
In the eight months of TinT, we’ve published 25 deeply researched newsletters, hosted one machine learning & mental health event and conducted an AI Psychoeducation workshop with trainee therapists at a reputed Indian educational institute.
We’ve also attracted all the right attention and blossomed into a small, multi-disciplinary team! To be so generously supported by all you readers is an honour – and a responsibility that I do not take lightly.
Along with your appreciation you've shared your feedback. As a response to that, this winter we’ve been underground, designing and developing our next steps. In the coming months, TinT will be focused on:
AI Psychoeducation and AI literacy – for those of you eager to develop an understanding of how AI intercepts relationships and guides your clients into being mindful consumers of technology
Critical Product Thinking & Building – for those of you who wish to apply yourselves toward building technology for mental health innovation.
Should you be more specifically interested in either or both of our above directions, reply to this email with the words AI PSYCHED and/or CRITICAL PRODUCT, and I shall get in touch with you.
Moving forward, you will find us in your inbox 2-3 times a month. As always, this newsletter will cover a wide gamut of topics and will continue to make you a technology-informed therapist.
Personally, TinT has been a dream in the making for the past 8 years. My 2019 vision board has ‘start something in the cross hairs of mental health and tech’ scribbled on it. Thank you for being a part of my story, our story, and the story of bridging two disciplines – machine learning and mental health.
With that, join me as we kickstart TinT’s 2026! :)
An Age-Old Argument: Scale Vs Depth
Should the industry lean into producing more tools and therapists as fast as possible, or insist on more evidence, better training, and careful standards before scaling?
Hark back to the time you first came across mental health innovation – the scale vs depth conundrum is perhaps one of the early dilemmas you might have confronted.
A Case For Scale
Circumstances that lead to Scale Mindset
There are 0.3 psychiatrists for every 100,000 persons in India. The numbers don’t get any better in the rest of Asian countries. [1][2]
The person-to-provider ratio is extremely skewed in most eastern hemisphere countries. A tad better, but nowhere close to ideal in the Western hemisphere countries either.
Even in well‑resourced settings, a large proportion (50% in US; 70-92% in India) of people who need care never reach a qualified professional, and traditional tele-health has plateaued because it still depends on limited clinician time. [3][4][5]
Problem areas that get tackled at scale
Scalers are often software tool builders since technology is the fastest means of distribution at scale. They tackle the access problem: millions are either on waiting lists, priced out, or living where specialists are absent.
Scalers argue that asynchronous tools, generative‑AI chatbots, and global platforms can extend some form of support to far more people, much faster than workforce expansion.
At scale, tools are usually positioned as self‑help or early support rather than formal treatment since user data, A/B testing, and agile product cycles take precedence over slow controlled trials.
Risks and benefits of solving for scale
There is some merit in the scale perspective. To an extent, solutions of scale provide meaningful symptoms relief for thousands or a few millions of people. Access beyond logistical and geographical constraints means care truly gets democratised. [6][7]
But the wins come with a catch.
MhTech tools that scale fast struggle to maintain retention. In simple words, tools too see inconsistent user engagement and dropouts leading to little impact. In the “scale now, refine later” mindset, guardrails for safety become an afterthought – a reaction to a tragic incident, or when use-cases become too big to ignore.
A Case for Depth
Circumstances that lead to Depth Mindset
Where scaling becomes risky, depth becomes important.
Think of how CBT core belief templates still require clinicians to know the client’s context: right use, right situation.
The argument for depth is that often, AI mental‑health tools lack robust, replicated evidence that algorithmic bias and stigma is not baked into models trained on social media data, and that LLM-based tools never give unsafe or misleading responses in high-risk contexts. [8]
Simply put, often MhTech tools are built on top of carefully prompted LLM models, which in turn are trained on massive Internet and social‑media data – meaning they inherit the stigma, stereotypes, and misinformation about mental illness from society while remaining opaque [9] and apparently-confident [10].
It is extremely difficult to prove that such tools will never mislead, misguide, or endanger the safety of their users, especially in high-risk contexts. This makes the case for employing the clinician’s mindset, for taking it slow and diving in deep.
Problem areas that get tackled at dept
Depthers work on complex problems that often have a Machine Learning and Deep Learning angle. This might include areas like safety, bias, explainability, privacy, security, replicability, and socio-cultural, socio-linguistic aspects of MhTech tools.
Academic teams, AI‑safety and clinical researchers are often depthers, who systematically test LLM‑based “therapists” for unsafe responses to suicidality, psychosis, and trauma through edge-case testing.
Risks and benefits of solving for depth
Depth brings meticulous, science-backed approach beyond market claims; it focuses on quality of answers and better understanding of context of the user and the situation.
The main application-based advantage is ethicality. Solving for depth reduces the risk that untested tools will entrench stigma, worsen crises, or quietly fail entire subgroups while looking successful at the surface for a majority.
However, depth-oriented solutioning takes a hit at speed. Achieving RCTs and regulatory‑grade evidence might take years while millions remain untreated. Depth actors risk functioning in academic silos, where their works rarely translate into widely available services. Here we run the risk of conservativeness even, hesitating to experiment with new modalities when early evidence suggests potential, only handling it when it becomes too big to ignore.
Both Sides and the Middle | Compiled by Perplexity AI
Bridging Scale and Depth
Scale and Depth aren’t so binary. For effective implementation of AI in mental health, we need the rigour of depthers and accessibility of scalers.
Scale can power depth. Teams powered by depth run pragmatic or naturalistic trials, publish results, and then feed continuous outcome data back into product development and clinical design. It helps with slow expansion into diverse samples, longitudinal observations, and the right detection of harms.
Depth can power scale in turn. Involving clinicians and researchers in product teams from the start, rather than asking them to “clean up” after launch, ensures the depth of their perspective is preserved.
What can you do?
The Key Design Principle: Co-Design
Co‑design with service users and clinicians.
This goes for everyone – technology users, builders, clinicians, and trainees.
Be it evaluation of equity and bias, or trials planned in deployment cycles, or outcome dashboards that are shared with both providers and users — the key here is to be intentional in co-designing mental health technology.
For practising psychologists, participating in co‑design, helping define safe use cases, contributing to trial design and outcome selection, and advocating for regulation (evidence + monitored innovation) are ways in which to bring your depth to power scale in mental health innovation.
For students and trainee-psychologists, test it yourself! Develop a baseline understanding of how technology fuels mental healthcare industry, develop a critical perspective and a taste of what good looks like. View study designs, engagement metrics, and outcome evaluation in product reports, and beyond that take on small projects.
For founders/ operators, employ hybrid models in product building where generative AI supports psychoeducation, documentation, triage, or between‑session practice, while licensed clinicians handle formulation, high‑risk decisions, and relational work.
For technologists, treat MH as high-stakes, where co-design is intentional, not an afterthought. Read guidelines for safety work by NIMH/ WHO/ APA. Build architecture with sound explainability of output, crisis escalation logic, and equity metrics for treating all users fairly. Look to partner with already established companies or clinicians on projects. And you can’t find them, reach out to us and we’ll connect you!
TinT's Thoughts
AI will neither rescue mental health systems nor solve the global mental health crisis. We don't have a choice between scale vs depth, rather we must make a bridge.
The work now is to shape how scale and depth inform each other. That starts with collaborations and interdependence (that is why we’re building the TinT’s community too!).
On one end, scale lures to fill the treatment gaps at the surface; on the other depth promises quality care before implementing. At the end, we’ll all have to meet like we always do: somewhere in the middle? 😉 (cheesy therapy talk but couldn’t resist)
This means that increasingly clinical professionals who know technology and adjacent fields will be in demand. Those who have knowledge of how both clinical and tech sides work, and function well in disciplinary teams rather than as individual players, will be in high demand.
Take care and see you soon, Harshali Founder, TinT Follow on Insta @be_tint Connect with me, Harshali on LinkedIn Check out more resources on the website
I know you're enjoying this newsletter – most of you are reading right up till the end; analytics don't lie. Do me a favour and share it with a friend or in the team chat and tell them to sign up? The difference between a therapist and a solopreneur is simply the ability to generate multiple streams of income.
Therapists deserve clarity on AI! TinT delivers insights and community to help you stay grounded and ahead
"Your newsletter felt like someone holding my hand through an unfamiliar field. It had none of the jargon-heavy complexity I brace for in most tech newsletters—just clarity, warmth, and flow."
5 min read@be_tintwebsite How To Open The AI Can Of Worms With Clients – Part 1 of 2 Hello dear reader, I’m back again at my desk with a mug of warm haldi doodh – turmeric latte for friends who didn’t grow up with cringe Indian kids associate with the beverage – settling into my usual writing posture for today’s newsletter. Except today's edition is unlike anything we've published before. Most of what TinT publishes lives in the tech, law, or business layer of mental health innovation. We’ve...
7 min read@be_tintwebsite What Therapists Need to Know About Data Privacy in Mental Health AI Hello dear reader, Confidentiality has always been one of the cornerstones of the therapeutic relationship. Clinical practice evolved around the architecture of closed doors, quiet rooms, and deeply held secrets. But increasingly, some parts of therapy live inside software systems. Notes are typed into digital platforms. Assessments are completed through apps. AI models are trained on patterns of...
5 min read@be_tintwebsite #28 | Innovation From Clinicians – Part II Hello dear reader, The days seem long and yet the weeks pass by too quickly as we enter the third month of 2026. I slot an hour on the last day of every month to reflect upon my journey of building TinT and acknowledge the distance travelled. In the 10 months that TinT has been running, the most memorable glimmers have been moments when we've crossed paths with clinicians who tinker with making and building. This edition...