It's mid-week, which means it's time for In Short—a quick dive into one machine learning term. Come weekend, we'll explore this term's real-world impact and why it matters to your work.
LL Who Now?
LLMs are a type of foundation model built specifically for working with language. That means they’re trained on huge amounts of text - transcripts, blogs, papers, social media, and every form of digital text you could imagine. They’re designed to do language tasks: write, summarise, respond, translate, and chat.
About to look up the meaning of foundation models? Read the last
In Short explainer instead
All LLMs are foundation models designed to process and generate human-like text, but not all foundation models are LLMs (some can also handle images, audio, or code).
When you ask an LLM something - say,
“How do I phrase this for a client feeling overwhelmed?”
it predicts a likely response based on patterns seen in similar texts. It doesn’t think, feel, or understand. It’s just guessing the next most probable word, based on its training.
LLMs are like high-powered language mirrors reflecting how humans write and speak, often with impressive fluency, but without human insight.
Take a Technical Plunge
LLMs are built on a neural network architecture called the transformer, which uses a mechanism called self-attention to track how words relate to each other - not just neighbouring words, but words across whole paragraphs. This is what makes their responses feel context-aware and emotionally fluent.
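If you’re curious what self-attention looks like mechanically, here’s a toy sketch in Python. The tiny random vectors below stand in for learned weights - it’s a minimal illustration of the idea, not how a production model is actually wired.

```python
# Toy self-attention: every token "looks at" every other token and becomes
# a weighted mix of them. Random embeddings here are stand-ins, not real weights.
import numpy as np

rng = np.random.default_rng(0)
tokens = ["I", "feel", "overwhelmed", "today"]
d = 8                                   # tiny embedding size for illustration
X = rng.normal(size=(len(tokens), d))   # one embedding vector per token

# In a real model these projection matrices are learned; here they're random.
W_q, W_k, W_v = (rng.normal(size=(d, d)) for _ in range(3))
Q, K, V = X @ W_q, X @ W_k, X @ W_v

scores = Q @ K.T / np.sqrt(d)           # how strongly each token attends to each other token
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # softmax per row
contextualised = weights @ V            # each token becomes a blend of all tokens

print(np.round(weights, 2))             # each row sums to 1: one attention pattern per token
```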
They don’t read full sentences like we do. They break everything into tokens – think tiny word chunks (e.g., “therapist” might become “ther,” “ap,” “ist”). Each model has a token limit, which affects how much of your input it can "remember" at once.
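To see tokenisation in action, here’s a quick sketch assuming the tiktoken package (the tokenizer used by several OpenAI models) is installed. The exact chunks depend on the tokenizer, so “therapist” may split differently from the example above.

```python
# Split a sentence into the token chunks a model actually sees.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
text = "How do I phrase this for a client feeling overwhelmed?"

token_ids = enc.encode(text)
print(len(token_ids), "tokens")              # this count is what fills up the token limit
print([enc.decode([t]) for t in token_ids])  # the word chunks behind the scenes
```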
The catch is... LLMs can hallucinate - making up facts, studies, or confident-sounding advice that isn’t real. And because they’re trained on general internet text, they reflect common biases and might lack nuance around trauma, ethics, or culturally informed care unless specifically fine-tuned.
This is where fine-tuning comes in. Fine-tuning means training the model further on specialised data like therapy transcripts, clinical notes, or tone-sensitive material, which helps the model adopt more clinically appropriate language and structure. It’s more expensive, so it’s typically done by companies building dedicated tools (e.g., Wysa, Woebot).
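For the technically curious, here’s roughly what that further training looks like in code - a minimal sketch assuming the Hugging Face transformers and datasets libraries, a small stand-in model (“distilgpt2”), and two invented placeholder lines instead of real, consented clinical data.

```python
# Minimal fine-tuning sketch: keep training a small base model on specialised text.
from datasets import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_model = "distilgpt2"                      # small stand-in so the sketch runs anywhere
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token      # this model has no pad token by default
model = AutoModelForCausalLM.from_pretrained(base_model)

# Placeholder "specialised data" - in practice: consented, de-identified material.
examples = [
    "Client: I feel stuck. Therapist: It sounds like this week has felt heavy.",
    "Client: I can't sleep. Therapist: Let's look at what keeps your mind busy at night.",
]
dataset = Dataset.from_dict({"text": examples}).map(
    lambda row: tokenizer(row["text"], truncation=True, max_length=128),
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tuned-model", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()                    # further trains the base model on the specialised text
trainer.save_model("tuned-model")  # the customised parameters, ready to reload later
```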
The other lever, open to any informed user, is prompt engineering: changing how you ask a question. You can do this yourself with whichever LLM you use.
For example, instead of: “Summarize this session.”
Try: “Summarize this session in a trauma-informed and strengths-based tone.” The better the prompt, the better the model’s response.
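Here’s what that same idea looks like if you’re calling an LLM from code rather than a chat window - a sketch assuming the openai Python package, an API key in your environment, and “gpt-4o-mini” as a placeholder model name; the session text is an invented stand-in.

```python
# Compare a plain prompt with an engineered one on the same (placeholder) text.
# Note: never paste real, identifiable client data into a third-party API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
session_text = "…de-identified session notes would go here…"

plain_prompt = "Summarize this session.\n\n" + session_text
engineered_prompt = (
    "Summarize this session in a trauma-informed and strengths-based tone, "
    "highlighting the client's coping strategies.\n\n" + session_text
)

for label, prompt in [("plain", plain_prompt), ("engineered", engineered_prompt)]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} prompt ---")
    print(response.choices[0].message.content)
```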
At its core, an LLM is just two files: a parameters file (the billions of learned weights) and a code file that runs those parameters. General-purpose as they are, you can still customize a foundation model for domain-specific tasks through fine-tuning.
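You can see that two-part split with open models: the downloaded weights are the parameters, and a few lines of code run them to predict the next token. A minimal sketch, again assuming the Hugging Face transformers library and the small “distilgpt2” checkpoint as a stand-in:

```python
# The "parameters file": from_pretrained fetches the model's weight files.
# The "code that runs them": generate() predicts one next token at a time.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilgpt2")   # loads the parameters

prompt = "One grounding exercise you can try today is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=30, do_sample=False)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```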
In Short:
LLMs are AI models trained on huge amounts of text to predict the next word in a sentence. They power tools like ChatGPT, Wysa, and AI notetakers. Built on a neural network called the Transformer, they use self-attention to understand word relationships across long texts, making their responses sound coherent and context-aware.
LLMs don’t think or understand — they mirror patterns in human language. They can be incredibly helpful for summarising, writing, brainstorming, or rephrasing, but they also carry biases, can hallucinate facts, and aren’t trauma-informed.
They’re already shaping tools therapists use — from chatbots to note generators — and they raise important privacy and ethical considerations, especially if used with sensitive client data. We'll dive into these in the next edition of TinT!
That's all for today, friends. See you over the weekend!
💬 Connect with me, Harshali on LinkedIn
📬 Subscribe to the newsletter here if you’re reading this as a free preview,
🔁 Share with a friend, we need more tech informed therapists!
Warmly,
Harshali
Founder, TinT