#5 | In Short: Large Language Models (LLMs)



It's mid-week, which means it's time for In Short—a quick dive into one machine learning term. Come weekend, we'll explore this term's real-world impact and why it matters to your work.

LL Who Now?

LLMs are a type of foundation model built specifically for working with language. That means they are trained on huge amounts of text - transcripts, blogs, papers, social media, and every other form of digital text you can imagine. They're designed to do language tasks: write, summarise, respond, translate, and chat.

About to look up the meaning of foundation models? Read the last In Short explainer instead.

All LLMs are foundation models designed to process and generate human-like text, but not all foundation models are LLMs (some can also handle images, audio, or code).

When you ask an LLM something - say,

“How do I phrase this for a client feeling overwhelmed?”

it predicts a likely response based on patterns seen in similar texts. It doesn't think, feel, or understand. It's just guessing the next most probable word, based on its training.
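If seeing this as code helps, here is a toy sketch of that "pick the next most probable word" step. The candidate words and their scores are invented purely for illustration; a real LLM scores tens of thousands of possible tokens at every step.

```python
# Toy sketch: turn made-up scores for candidate next words into probabilities
# (a softmax) and pick the most likely one, as an LLM does at each step.
import math

candidates = {"overwhelmed": 2.1, "tired": 1.3, "fine": 0.2}   # invented scores
total = sum(math.exp(score) for score in candidates.values())
probs = {word: math.exp(score) / total for word, score in candidates.items()}
print(max(probs, key=probs.get))   # "overwhelmed" - the most probable next word
```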

LLMs are like high-powered language mirrors reflecting how humans write and speak, often with impressive fluency, but without human insight.

Take a Technical Plunge

LLMs are built on a neural network architecture called the transformer, which uses a mechanism called self-attention to track how words relate to each other - not just nearby words, but words across whole paragraphs. This is what makes their responses feel context-aware and emotionally fluent.
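For the curious, here is a minimal self-attention sketch in NumPy. The sizes and values are made up, and real transformers stack many such layers with learned weights, but the core idea - every word's representation is rebuilt as a weighted mix of every other word's - is all here.

```python
# Minimal self-attention: each token's vector is updated with a weighted blend of
# all the other tokens' vectors, so context from anywhere in the text can flow in.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    Q, K, V = X @ Wq, X @ Wk, X @ Wv                 # queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # how strongly tokens attend to each other
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # softmax
    return weights @ V                               # context-aware token vectors

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                          # 5 tokens, 8-dimensional vectors (toy sizes)
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)           # (5, 8)
```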

They don’t read full sentences like we do. They break everything into tokens – think tiny word chunks (e.g., “therapist” might become “ther,” “ap,” “ist”). Each model has a token limit, which affects how much of your input it can "remember" at once.
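If you want to see tokenisation in action, here is a small sketch using the open-source tiktoken library (one of several tokenisers; the exact chunks vary from model to model).

```python
# Split a sentence into the token chunks an LLM actually sees,
# using the open-source tiktoken library (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")   # tokeniser used by several OpenAI models
text = "How do I phrase this for a client feeling overwhelmed?"
token_ids = enc.encode(text)
print(len(token_ids), "tokens")
print([enc.decode([t]) for t in token_ids])  # the individual chunks, as text
```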

The catch is... LLMs can hallucinate - making up facts, studies, or confident-sounding advice that isn't real. And because they're trained on general internet text, they reflect common biases and may lack nuance around trauma, ethics, or culturally informed care unless specifically fine-tuned.

This is where fine-tuning comes in. Fine-tuning means training the model further on specialised data like therapy transcripts, clinical notes, or tone-sensitive material. It's more expensive and is mostly done by companies building dedicated tools (e.g., Wysa, Woebot). It helps the model adopt more clinically appropriate language and structure.
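To make that concrete, here is a heavily simplified fine-tuning sketch using the Hugging Face transformers and datasets libraries. The dataset file, base model, and settings are placeholders meant to show the shape of the process, not a production recipe - and real clinical text would need consent and de-identification long before it came near a script like this.

```python
# Simplified fine-tuning sketch: continue training a small open model on a
# (hypothetical) file of specialised text, using Hugging Face transformers.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "gpt2"                                    # small open model, chosen only so this runs cheaply
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token              # gpt2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# "specialised_text.jsonl" is a hypothetical file of {"text": "..."} records.
data = load_dataset("json", data_files="specialised_text.jsonl")["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
                remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tuned-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()   # further training nudges the model toward the specialised language
```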

Another lever you have as an informed user is prompt engineering. You can do some preliminary prompt engineering with whatever LLM you already use: simply change how you ask a question.

For example, instead of: “Summarize this session.”

Try: “Summarize this session in a trauma-informed and strengths-based tone.” The better the prompt, the better the model’s response.
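Here is what that difference can look like if you are calling a model from code rather than a chat window. This is a rough sketch assuming the official openai Python client; the model name is just an example, and session_text is a placeholder for whatever you want summarised.

```python
# Contrast a generic prompt with a more carefully engineered one.
# Assumes the openai Python client and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()
session_text = "..."   # placeholder for the text you want summarised

generic_prompt = "Summarize this session.\n\n" + session_text
engineered_prompt = ("Summarize this session in a trauma-informed and "
                     "strengths-based tone, in under 150 words.\n\n" + session_text)

for prompt in (generic_prompt, engineered_prompt):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",                            # example model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(reply.choices[0].message.content, "\n---")
```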

At its core, an LLM is just two files: a parameters file and a small code file that runs those parameters. Even though the parameters file can hold billions of values and support many possible uses, you can still customize a foundation model for domain-specific tasks through fine-tuning.
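A toy version of that "two files" idea, with everything shrunk down and randomised so it runs anywhere:

```python
# "File 1": a bundle of learned parameters (random numbers here, for illustration).
# "File 2": the short piece of code that runs them to predict the next token.
import numpy as np

rng = np.random.default_rng(0)
weights = {
    "embeddings": rng.normal(size=(1000, 16)),     # toy vocabulary of 1,000 tokens
    "output_matrix": rng.normal(size=(16, 1000)),
}

def predict_next(token_ids):
    x = weights["embeddings"][token_ids]           # look up a vector for each token
    logits = x @ weights["output_matrix"]          # score every token in the vocabulary
    return int(logits[-1].argmax())                # id of the most probable next token

print(predict_next([12, 7, 431]))                  # made-up token ids, made-up answer
```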

In Short:

LLMs are AI models trained on huge amounts of text to predict the next word in a sentence. They power tools like ChatGPT, Wysa, and AI notetakers. Built on a neural network called the Transformer, they use self-attention to understand word relationships across long texts, making their responses sound coherent and context-aware.

LLMs don’t think or understand — they mirror patterns in human language. They can be incredibly helpful for summarising, writing, brainstorming, or rephrasing, but they also carry biases, can hallucinate facts, and aren’t trauma-informed.

They’re already shaping tools therapists use — from chatbots to note generators — and they raise important privacy and ethical considerations, especially if used with sensitive client data. We'll dive into these in the next edition of TinT!


That's all for today, friends. See you over the weekend!

💬 Connect with me, Harshali, on LinkedIn
📬 Subscribe to the newsletter here if you’re reading this as a free preview
🔁 Share with a friend - we need more tech-informed therapists!


Warmly,
Harshali
Founder, TinT

