
Every week, it feels like a new AI tool launches. A co-pilot, a chatbot, a plugin to help you think faster or feel smarter. And I get it, there’s a lot to be excited about. But lately, I’ve found myself circling back to one question: What kind of relationship do we actually want with our AI tools?
You’re not just teaching the AI what to say — you’re teaching it how and when to care.
When I first learned how to build a GPT, I was fascinated by what it could do. But I was just as curious about what it was built on. Most large language models are trained on publicly scraped text: Reddit threads, Wikipedia articles, old blogs, and news sites. It’s a ton of content, but it isn’t neutral. It carries a tone, usually Western, often a little argumentative, and rarely an emotionally sensitive one.
If your tool is meant to support nuance, softness, or cultural specificity, generic data won’t cut it. It may even do active harm.
AI is good at patterns, not people.
So I tuned my custom GPT not just with diverse source material, but also with specific tone instructions. I added adjectives like: soft-spoken, non-directive, warm, emotionally literate, never presumptuous. I fed it example phrases that felt familiar — like something a cousin or close friend might text when they’re gently checking in, not prescribing a fix.
Not “Try opening up again,” but “Would it feel okay to send a photo?”
Not “Here’s what to do next,” but “There’s no rush. I’m just here.”
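If I were to sketch that tuning in code rather than in the GPT builder, it might look something like the snippet below, using the OpenAI Python client. The model name, the exact wording of the instructions, and the few-shot pairs are stand-ins for illustration, not my actual configuration.

```python
# A minimal sketch: tone instructions plus example phrases encoded as a
# system message and few-shot pairs. Model name and wording are illustrative.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

TONE_INSTRUCTIONS = (
    "You are soft-spoken, non-directive, warm, emotionally literate, "
    "and never presumptuous. Sound like a cousin or close friend gently "
    "checking in, not a coach prescribing a fix. Ask, never tell."
)

# Few-shot pairs showing the shift from directive to gentle phrasing.
EXAMPLES = [
    {"role": "user", "content": "They haven't replied in a while."},
    {"role": "assistant", "content": "Would it feel okay to send a photo?"},
    {"role": "user", "content": "I don't know what to say next."},
    {"role": "assistant", "content": "There's no rush. I'm just here."},
]

def gentle_reply(user_message: str) -> str:
    """Return one soft, non-directive check-in line."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": TONE_INSTRUCTIONS},
            *EXAMPLES,
            {"role": "user", "content": user_message},
        ],
        max_tokens=40,  # keeps replies short by construction
    )
    return response.choices[0].message.content.strip()
```

The small max_tokens cap is doing quiet work here, too: short replies are enforced by construction, not just requested in the prompt.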
The more personal the moment, the simpler the AI.
For a while, I thought a more informed AI meant a better experience: more data means more language, and more language means more “aliveness.” But through testing, I saw the opposite. People didn’t want long replies. They didn’t want advice. And they definitely didn’t want to feel like they were talking to something pretending to know everything.
Instead of building a back-and-forth chatbot, I focused on simple, single-line prompts. Openers, not explainers. Things like: “Want to send a photo from your walk yesterday?”
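One way to hold that “opener, not explainer” constraint in code is a small guard that keeps only the first line of whatever the model returns and quietly drops anything that starts lecturing. The phrase list and word cap below are assumptions, just to show the shape of the idea.

```python
# A sketch of the "opener, not explainer" constraint as a post-processing
# guard. The phrase list and length cap are illustrative assumptions.
DIRECTIVE_PHRASES = ("you should", "try to", "here's what", "next step")

def as_opener(reply: str, max_words: int = 12) -> str | None:
    """Keep only the first line; drop it if it lectures or runs long."""
    lines = reply.strip().splitlines()
    if not lines:
        return None
    first_line = lines[0].strip()
    if len(first_line.split()) > max_words:
        return None  # too long to feel like a gentle opener
    if any(phrase in first_line.lower() for phrase in DIRECTIVE_PHRASES):
        return None  # reads like advice, not an invitation
    return first_line
```

Run on “Want to send a photo from your walk yesterday?”, it passes through untouched; an advice-shaped paragraph comes back as None.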
Through crits, I learned that maybe the prompt isn’t even the point. It was suggested that the photo should just appear, already selected, ready to send. So I tested it, and found that people were 40% more likely to share when the action was already halfway done.
That insight led me to think about metadata: the things people already have on their phones that say a lot without asking a lot. For example: photos, playlists, voice notes, and journals.
But I also learned: just because AI can pull something up doesn’t mean it should. There’s a line between helpful and invasive, and in emotional spaces, that line moves with every person, every moment.
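If I were to prototype that boundary, it might look like the hypothetical sketch below. Every name here is an assumption; the point is that each source is opt-in, only a lightweight preview is ever surfaced, and the default is to suggest nothing at all.

```python
# A hypothetical sketch: on-device metadata surfaced as gentle suggestions,
# gated by an explicit opt-in per item so the tool only pulls something up
# when the person has said that's okay.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class MemoryItem:
    source: str        # "photo", "playlist", "voice_note", "journal"
    preview: str       # a thumbnail path or a title, never the full content
    created: datetime
    opted_in: bool     # the person chose to let this source be suggested

def suggest_one(items: list[MemoryItem]) -> MemoryItem | None:
    """Pick at most one recent, opted-in item so the action is already
    halfway done; return None rather than reach into anything private."""
    allowed = [item for item in items if item.opted_in]
    if not allowed:
        return None
    return max(allowed, key=lambda item: item.created)
```

Returning None is the important part: silence is the default, and a suggestion is the exception.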
So now my question is less about what AI can do and more about when it should stay quiet. The best tools don’t just act; they know when to step forward and when to step back.