Hello dear reader,
It’s Saturday evening. The sunset hues outside my windows are magnificent. My weekend writing rhythm has set in. Today’s piece is unlike any before.
This is the first time TinT features a therapist’s own writing.
I met Daniel Fleshner a few months ago via LinkedIn.
I feel I should send the LinkedIn team a thank-you note for all the meaningful connections the platform has sparked for me this year.
Daniel is a therapist and technology consultant whose curiosity about AI and how it shapes clinical practice is both thoughtful and grounded.
In our conversations, what stood out to me was the balance in Daniel's perspective. On the one hand, he was critically analysing innovation through a clinical lens.
On the other, he was absorbing every movement in the industry, thinking like an innovator with a robust creative approach to the question: "So what next?"
His reflections in this essay resonated with me deeply, and I think they’ll do the same for you.
Where To Begin Thinking About AI
When people consider the intersection of AI and therapy, the natural inclination is to imagine an AI therapist. In reality, though, AI disruption is more likely to come from the integration of AI tools into therapy and the broader behavioural health system.
These tools come with both risk and opportunity, and it’s helpful to take a nuanced look at how to use AI in the field responsibly, in ways that improve both client access to quality mental healthcare and client outcomes.
In this piece, I’ll explore what to look for in AI tools, when the risk isn’t worth the reward, and a few creative ways therapists can use AI.
Guardrails On Applications
While I hesitate to say any uses of AI should be outright banned, I do believe there should be significant guardrails around its use.
This is especially true while AI (and specifically AI in therapy) is in its infancy, because we don’t have data on its long-term impacts. Through an ethical lens, here are some red flags that signal guardrails are needed:
- AI “therapists” that don’t have robust risk assessment capabilities. This one is tough right now, because it’s very difficult to test risk assessment effectively before release without potentially causing harm.
- Improper data collection. This isn’t unique to AI (looking at you, BetterHelp), but the tech explosion in the mental health field has created some major issues around how to store client data in a way that’s HIPAA-compliant and maintains confidentiality.
It can also be a litmus test for the type of company that makes the product. If they’re not transparent and thoughtful about how they store your data, do you really want to trust them with vulnerable emotions?
- Tools built without significant clinician input. If there weren’t therapists involved in the creation of the tool, I feel cautious about whether the tool is addressing an actual need rather than functioning as a solution looking for a problem.
Positive Signs To Look For
In contrast to the red flags above, there are some green flags that indicate a company or product may be worth exploring:
- The product seeks to work with therapists rather than circumventing or replacing them. This is often evident in the tone of the marketing materials, especially when impact is advertised above financials.
- There is a clear issue that the product is addressing. The issue can be systemic (e.g. trying to get better reimbursement rates) or individual (e.g. facilitating productive topics within a therapy session), but we should avoid AI integration for the sake of sounding modern.
- Clinicians support the tool. While clinicians aren’t a monolith, having a trusted therapist recommend a tool is a good indicator of its efficacy, and clinicians are trained to look out for ethical issues. This is in contrast to using something based on a fancy marketing pitch or out of desperation to feel better.
Creative Ways to Use AI
On a more fun note, I got curious about the ways AI can augment the therapy that’s already happening and improve outcomes.
Here are a few ideas that come to mind:
- A tool that lets users journal or otherwise articulate their thoughts between sessions, then culls those entries down to important nuggets to bring up during the next session (I sketch a rough version of this at the end of this section). A potential expansion of this would be to allow therapists access (with permission) to their clients’ summaries to better prepare for sessions. I believe Grow Therapy is already working on something along these lines.
- A tool that provides feedback to clinicians after a session and offers suggestions about important items to address in subsequent sessions.
- A tool that suggests CBT/DBT homework for clients between sessions and helps hold them accountable for following through on it. A bonus feature could connect to the client’s calendar to help them schedule times to do the homework.
These are relatively broad and unpolished ideas, but they’re meant to give an understanding of creative ways that AI can be used.
This is in contrast to the tools we’re currently seeing, which are predominantly AI notetakers for clinicians (those aren’t bad, but the market is flooded with them, and they don’t move the needle enough to substantially change client access or outcomes).
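For the technically curious, here is a minimal sketch of what that first idea, the between-session journaling summariser, might look like under the hood. Everything in it is an assumption on my part: the `JournalEntry` data model, the prompt wording, and the stand-in `call_llm` function are hypothetical placeholders, not a description of Grow Therapy’s product or any real API.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class JournalEntry:
    """A single between-session journal entry (hypothetical data model)."""
    written_on: date
    text: str


def call_llm(prompt: str) -> str:
    # Stand-in for a real model call. A production tool would route this
    # through a HIPAA-compliant LLM endpoint with appropriate data handling.
    raise NotImplementedError("wire up a HIPAA-compliant LLM backend here")


def summarize_entries(entries: list[JournalEntry]) -> str:
    """Cull a batch of journal entries down to a few session-ready nuggets."""
    combined = "\n\n".join(
        f"{entry.written_on.isoformat()}:\n{entry.text}" for entry in entries
    )
    prompt = (
        "You are helping a therapy client prepare for their next session. "
        "From the journal entries below, list the 3-5 most important "
        "themes or events for them to raise with their therapist.\n\n"
        + combined
    )
    return call_llm(prompt)
```

Note what the sketch deliberately does not do: it never responds to the client directly. It only prepares material for the human session, which keeps the therapist firmly in the loop, and (with permission) the same summary could be shared with the therapist ahead of time.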
To Tie All Of This Together...
I encourage thoughtful exploration of AI products in the behavioural health field, and careful consideration of how we can deploy the more ambitious ones safely.
AI is coming for the field one way or the other, and we’re at a critical point where we need to be mindful about what that looks like.
Adopting a strategy of resisting all innovation takes away our seat at the table, and despite my concerns about certain AI tools out there, I’d rather be a part of the conversation.
About the author
Daniel Fleshner, based in Denver, Colorado, is a Licensed Professional Counselor and AASECT Certified Sex Therapist, and the founder of Inflection Point Therapy.
He specialises in sex and relationship therapy and is deeply committed to expanding access to affordable care. His therapeutic approach is eclectic, integrating mindfulness-based behavioural therapy and person-centred modalities.
Daniel sees therapy as a constantly evolving field and believes that, when used responsibly, technology can expand access, improve outcomes, and drive innovation.
You can find Daniel’s insights over on his Substack, The Disrupted Therapist, and you can contact him through his website.
If you found this piece interesting, perhaps share it with someone who's interested in the MhTech space?
Take care and see you next weekend,
Harshali
Founder, TinT
Connect with me on LinkedIn