#9 | In Short: Prompt Engineering



It's time for your mid-week In Short! Your weekly intro to tech concepts without jargon.

Amidst the swirl of AI terms, we hope we haven’t lost you!

Here’s a recap of what we’ve explored so far through our analogies rooted in your daily world.

LLM = The New Intern [read In Short]

An LLM (Large Language Model) is an intern who has read every psychology textbook, every DSM revision, all the research papers you can think of, and thousands of therapy transcripts. They’re trained, but they’ve never met a single client. They’re generalists.

Fine-tuning = Your Clinic’s Supervision [read In Short]

Now, you want this intern to work well in your clinic. You teach them how you work, the values you hold, and what your therapeutic language looks like. Over time, they get better at tailoring their work to your specific context. That’s fine-tuning: training a general model for a specific setting.

Prompt Engineering = How You Talk to Them

Even with the best intern, if your instructions are vague, you’ll get vague work. But if you’re clear about your tone, structure, and expectations, they’ll show up better. That’s what prompt engineering is all about: learning to speak to your intern (the model) clearly to get the best out of them.

Which brings us to today’s theme.

What Is Prompt Engineering?

Before we begin: this isn’t programming. You don’t need to code. Prompt engineering is linguistic and relational, not just technical.

It’s the practice of crafting effective instructions for an AI tool so that it gives you relevant, nuanced, emotionally intelligent responses.

What you ask shapes what you get.

If you say:

“Write something about anxiety.” → You’ll get a generic summary.

But if you say:

“You are a primary school teacher. Write a 3-paragraph psycho-educational note explaining anxiety to 8-10 year olds using metaphors, stories or references from the Harry Potter books. Do not pathologize or try to diagnose, and do not include pop-culture psychology terms.” → You’ll get something specific and potentially delightful.
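
(If you’re curious what this contrast looks like when a developer wires it up, here’s a minimal sketch. It assumes the OpenAI Python client, and the model name is purely illustrative; you don’t need to run it to follow along.)

```python
# A minimal sketch of the vague vs. specific contrast, assuming the OpenAI
# Python client (pip install openai). The model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from your environment

vague_prompt = "Write something about anxiety."

specific_prompt = (
    "You are a primary school teacher. Write a 3-paragraph psycho-educational "
    "note explaining anxiety to 8-10 year olds using metaphors, stories or "
    "references from the Harry Potter books. Do not pathologize or try to "
    "diagnose, and do not include pop-culture psychology terms."
)

for prompt in (vague_prompt, specific_prompt):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)
    print("---")
```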

But... How Exactly Does This Happen?

Here’s what an LLM does in milliseconds when you type a prompt:

1. Tokenization: Breaking Words into Chunks

Your prompt is broken into tokens: not full words, but pieces of words.

E.g., “Therapist” → “Ther,” “apis,” “t.”

It’s like breaking down a complex sentence to find the emotional core underneath.
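
(For the curious, here’s a tiny sketch of this step, assuming the tiktoken library that OpenAI’s models use. The exact pieces a word breaks into depend on the tokenizer, so treat the split above as illustrative.)

```python
# A tiny sketch of tokenization, assuming the tiktoken library
# (pip install tiktoken). The exact split depends on the tokenizer.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

token_ids = enc.encode("Therapist")
pieces = [enc.decode([t]) for t in token_ids]

print(token_ids)  # a short list of integers (the token IDs)
print(pieces)     # the text chunks those IDs stand for
```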

2. Embeddings: Making Meaning Maps

Next, those tokens are turned into embeddings: mathematical representations of meaning.

Imagine a vast, multidimensional landscape where each word is a glowing point connected by invisible threads of similarity. “Anxiety” clusters near “worry,” “unease,” and “nervousness,” forming a dense, shadowy neighbourhood — while “playfulness” floats far away in a bright, airy region filled with lighter emotions.

This is how the model begins to grasp nuance, tone, and relationships.
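
(Here’s a toy sketch of that “meaning map”, using made-up three-number embeddings. Real models use hundreds or thousands of dimensions, and the numbers below are invented purely for illustration.)

```python
# A toy illustration of embeddings: words as points, similarity as the angle
# between them. The three-number vectors are made up for illustration only;
# real embeddings have hundreds or thousands of dimensions.
import numpy as np

embeddings = {
    "anxiety":     np.array([0.90, 0.10, 0.05]),
    "worry":       np.array([0.85, 0.15, 0.10]),
    "playfulness": np.array([0.05, 0.90, 0.30]),
}

def cosine_similarity(a, b):
    # 1.0 = pointing the same way (very similar); near 0 = unrelated
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["anxiety"], embeddings["worry"]))        # high
print(cosine_similarity(embeddings["anxiety"], embeddings["playfulness"]))  # low
```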

3. Prediction: One Token at a Time

Then comes prediction. The model writes your output one token at a time, asking itself repeatedly:

“Given everything known so far, what’s the most likely next token?”

Piece by piece, the LLM assembles your sentence, and with it a probable logic, structure, and meaning.

Change a few words in your prompt, and you’ll nudge the model in an entirely different direction.
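
(And here’s a toy sketch of that prediction loop, with a hand-written probability table standing in for the model. A real LLM looks at your entire prompt so far, not just the last word, and scores tens of thousands of possible tokens at every step.)

```python
# A toy next-token loop: at each step, pick the most likely next word from a
# hand-written probability table. The table is invented for illustration; a
# real model conditions on the whole context, not just the previous word.
next_word_probs = {
    "Anxiety": {"is": 0.60, "feels": 0.30, "banana": 0.01},
    "is":      {"a": 0.50, "the": 0.30, "very": 0.10},
    "a":       {"feeling": 0.40, "signal": 0.30, "response": 0.20},
}

sentence = ["Anxiety"]
for _ in range(3):
    options = next_word_probs.get(sentence[-1])
    if not options:
        break
    # "Given everything known so far, what's the most likely next token?"
    sentence.append(max(options, key=options.get))

print(" ".join(sentence))  # "Anxiety is a feeling"
```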

Why Therapists Are Great at Prompt Engineering

Prompt engineering isn’t a leap—it’s a shift. Therapists already have the instincts.

  • Clarity – You know how to be precise in your questions and goals.
  • Empathy – You adjust tone and phrasing based on the person in front of you.
  • Curiosity – You explore, reframe, and rephrase all the time.
  • Articulation – You are specific and intentional in your choice of words.

Therapists guide people to think, or not think, in specific directions every day. Prompting a machine to “think” a certain way is new - and intriguing. Give it a shot!

And if you haven’t already, our weekend read will set you up for it.

In Short:

Prompt engineering is the art of communicating effectively with AI - think of it like guiding a new intern: the clearer your instructions, the better the outcome.

It’s not coding; it’s conversation. When you write a prompt, the AI breaks it into tokens (chunks of text), maps them to meaning using embeddings, and then predicts a response word by word.

The way you phrase things can dramatically change the output.

Prompt engineering is a natural extension of a therapist’s existing skills: empathy, precision, curiosity, and articulation—now applied to machines.

Help us sustain this effort

💛 If you learned something new via TinT, consider supporting us.

We’re committed to keeping TinT independent (no ads, no sponsors!) because real learning deserves a space free from sales pitches. Your one-time contribution helps make that possible.


Thanks for reading In Short! If you found this helpful, share it with a colleague who's learning about AI, just like you.

💬 Connect with me, Harshali, on LinkedIn
📬 Subscribe to the newsletter here if you’re reading this as a free preview
🔁 And pass it along if it sparked something; it helps more than you know.

See you this weekend for the long(er) read!
Harshali
Founder, TinT

