How to Make Claude Write in Your Voice (Without Breaking It)

A practical method for calibrating Claude to your brand voice — tested in production, built around project files, a banned words list, and real examples.

April 27, 2026
9 min read
Tags: Tutorial, WordPress, AI


The first time I tried to get Claude to write in my voice, I did what most people do. I wrote a long system prompt. Don't use em-dashes. Don't start sentences with "In today's fast-paced world." Avoid corporate jargon. Be punchy but not too punchy. Don't sound like an AI. Don't sound like a LinkedIn influencer. Don't use the word "leverage." Or "unlock." Or "delve." Keep sentences varied. Use personal anecdotes. Be specific. Be concise. Be warm but not fake.

The output was terrible. Not wrong, exactly. Just strangled. Every paragraph sounded like it had been written by someone who knew a supervisor was watching through a one-way mirror. I kept adding rules to fix it, and each new rule made it worse.

Then I remembered RoboCop 2.

The RoboCop 2 problem

In the 1990 sequel RoboCop 2, Dr. Juliette Faxx gets control of RoboCop's programming and uploads a long list of new directives, over 300 in total. Most of them are corporate PR nonsense designed to make him "safer" and more marketable: avoid language that could offend, refer emotional complaints to trained counselors, avoid negative attitudes, and uphold the image of OCP at all times. RoboCop becomes useless: he can barely function because every real police action conflicts with some new rule. The only way out is to electrocute himself, effectively wiping the directives and restoring his original programming.

That was my prompt. Three hundred directives. Nothing could happen because everything was forbidden.

The fix wasn't a better prompt. It was less prompt. Fewer rules, more examples.

If you're running content ops (writing newsletters, LinkedIn posts, product blogs, landing pages) and you want Claude (or any LLM) to sound like a human writer instead of a press release generator, this is the method that actually works. It's what I use for my own writing, and it's what I set up for clients who need AI-assisted content without the AI aftertaste.

Why instructions alone don't work

LLMs learn faster from examples than from rules. This is the single most important thing to understand, and almost every "prompt engineering" guide skips it.

Tell Claude "write in a conversational tone" and it will produce something that sounds like a corporate brochure pretending to be a person. Show Claude five examples of how you actually write in a conversational tone, and it will copy the rhythm, the vocabulary, the pacing, and the quirks without you having to name any of them.

Rules tell the model what to avoid. Examples show the model what to do. You need some of both, but the ratio matters. My rough working number: one page of examples is worth about ten pages of instructions.

The setup: Claude Projects

Claude Projects are folders that live inside your account. Each project has its own knowledge files (reference documents Claude reads before responding) and its own system instructions. Every conversation inside that project starts with that context already loaded.

This is the piece most people skip. They open a new chat, paste a half-baked brief, and wonder why the output sounds generic. Of course it does. The model has nothing to work with.

Setting up a proper voice-calibration project takes maybe thirty minutes, and you only do it once. Here's what goes in it.

What to put in the knowledge files

Four things, in descending order of importance.

Published work that actually sounds like you. Not drafts. Not things you settled for. Pick eight to fifteen pieces you'd be happy to have someone else mistake for your best work. Blog posts, newsletters, LinkedIn posts, anything where the voice is locked in. Paste them into text files and upload them. Quality matters more than quantity, so it’s important to be picky here: ten great examples beat thirty mediocre ones.

A banned words list. More on this in a moment. This is the shortest file, but it punches above its weight.

A voice notes document. Half a page, no more. Plain language. Things like: "I write in first person. I open with concrete scenes, not abstract claims. I avoid rhetorical questions in the first paragraph. I use short declarative closers. My Medium voice is more colloquial than my LinkedIn voice." Don't list fifty rules. List the five things you'd tell a new ghostwriter if you had to onboard them in five minutes, or how you would describe your style in an interview.

Reference pieces that show what to avoid. Optional but useful. One or two AI-written blog posts, clearly labeled as counter-examples. Claude will read them and use them as the negative pattern. This is more useful than it sounds.

The banned words list (the actual file)

Here's a sample from the list I use, lightly edited:

Hurdles
Bustling
Harnessing
Unveiling the power
Realm
Demystify
Insurmountable
New era
Poised
Unravel
Unleash
Delve
Enrich
Multifaceted
Elevate
Discover
Supercharge
Unlock
Tailored
Elegant
Dive
Ever-evolving
Meticulously
Grappling
Embark
Navigate
Journey

Why these specific words? They're not bad in every context. The problem is that LLMs reach for them reflexively, using them to fake complexity. When Claude wants to sound impressive and doesn't know what to say, it picks up one of these words and gestures with it. "This unlocks new possibilities." "We delve into the multifaceted nature of the problem." "An ever-evolving landscape of tailored solutions." Nothing is being said, and the words are doing the work of hiding it.

Banning them forces the model to write around the empty spaces, which usually means it has to say something concrete to fill them. "This unlocks new possibilities" becomes "this lets your support team answer twice as many tickets," because there's no escape hatch.

Build your own list the way I built mine: every time you catch Claude doing the thing, add the word. After a month, you'll have forty or fifty words. After three months, you'll have a stable list that catches 90% of the AI-voice tells before they ship.
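Once the list grows past a dozen entries, eyeballing drafts stops scaling. A minimal sketch of a banned-words linter you could run over a draft before publishing (the function name, the sample list, and the draft text here are all illustrative, not from my actual files):

```python
import re

def find_banned(text, banned):
    """Return banned words/phrases that appear in text, case-insensitive."""
    hits = []
    lowered = text.lower()
    for phrase in banned:
        # \b keeps "delve" from matching inside longer words; the optional
        # suffix catches simple inflections like "unlocks" or "tailoring".
        pattern = r"\b" + re.escape(phrase.lower()) + r"(s|ed|ing)?\b"
        if re.search(pattern, lowered):
            hits.append(phrase)
    return hits

banned = ["unlock", "delve", "tailored", "ever-evolving"]
draft = "We delve into an ever-evolving landscape of tailored solutions."
print(find_banned(draft, banned))  # → ['delve', 'tailored', 'ever-evolving']
```

Point it at your real list file and pipe drafts through it; anything it flags goes back to Claude for a rewrite, or onto the list's exception notes if it was actually fine in context.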

The system instructions (keep them short)

The instructions file is where RoboCop 2 usually happens. People pile directive on directive until the model can't move. Don't do this.

Here's the structure I use, adapted per project:

  • Who the writing is for (specific audience, not "professionals")
  • What voice to match (reference the knowledge files: "match the register of the articles in this project")
  • Three or four things to avoid (not twenty)
  • One hard rule about structure if you have one (e.g. "don't open with rhetorical questions")
  • One line telling Claude to flag when it's unsure rather than guess

That's it. Five or six bullet points. If your instructions file is longer than half a page, it's too long, and the output will start to feel compressed.
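To make that concrete, here's what a complete instructions file following that structure can look like. The audience, rules, and wording below are invented for illustration; substitute your own:

```text
Audience: solo marketers and heads of content at small B2B companies.
Voice: match the register of the articles in this project's knowledge files.
Avoid: rhetorical questions in the opening, listicle framing, and any
  word on the banned list in this project.
Structure: open with a concrete scene or example, not an abstract claim.
If you're unsure about a fact, flag it instead of guessing.
```

Five lines. If yours is meaningfully longer than this, you're probably encoding things the example files should be teaching instead.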

A before-and-after example

Without calibration, ask Claude to write the opening of a blog post about team communication and you'll get something like this:

"In today's fast-paced workplace, effective communication has become more critical than ever. As teams become increasingly distributed, mastering the art of clear, empathetic dialogue can unlock new levels of productivity and collaboration. In this post, we'll explore the key pillars of modern team communication."

With a calibrated project (examples loaded, banned words in place, three short voice notes) the same prompt produces something closer to this:

"Last Tuesday, I watched a product team spend forty minutes in a meeting that should have been a two-line Slack message. Nobody wanted to be the person who said so. That's the real communication problem in most teams, and it's not solved by better tools."

Same prompt. Same model. Different context. The second version isn't magic; it's what happens when the model has enough reference material to skip the generic mode and start imitating an actual writer.

What this gets you at scale

If you're a solo marketer or running a small content operation, this is the difference between publishing three usable pieces a week and publishing thirty pieces that all sound the same and rank for nothing.

Google's recent updates have gotten better at detecting low-effort AI content, and they penalize it. Readers have gotten faster at smelling it too — you've probably clicked away from a post yourself this week because the first paragraph felt like a chatbot. Generic AI content isn't just a style problem anymore.

Voice calibration is the cheapest and simplest fix available. You don't need to wait months for a new model that promises to revolutionize everything while you keep shipping generic content in the meantime. Thirty minutes of setup and a text file or two, and your output stops looking like everyone else's output.

This is also part of why most AI integrations stall after the demo. The demo uses a perfect prompt on a perfect input. Production publishes ten pieces a week on ten different topics, and the voice falls apart by Wednesday. The fix isn't a smarter model. It's the context layer around the model.

I wrote more about the model-versus-context question in the overview post if you want the longer version of that argument.

Quick checklist

If you do nothing else, do this:

  • Create one Claude Project for your brand voice
  • Upload eight to fifteen of your best published pieces
  • Add a text file with fifteen to thirty banned words
  • Write a half-page voice notes document
  • Keep the system instructions under six bullet points
  • Every time Claude outputs something that sounds wrong, update one of those files instead of arguing with it in chat

Do this once and your content operation stops fighting the model.

If you want this set up for you

Most clients I work with aren't writers. They're founders or marketing leads who need AI-assisted content that doesn't embarrass them, and they don't have thirty hours to figure out what belongs in a calibration project. If that's you, I do this as part of AI integration work for teams — Claude Projects set up correctly, voice calibrated against your real content, banned word lists built from the specific mistakes your content has been making.

You can reach me at me@phalkmin.me or through the contact form. Send me two or three published pieces you like and I'll tell you honestly whether voice calibration is what you need or whether the problem is somewhere else.

And if you're setting this up yourself: start with the examples. The instructions can wait.


© 2026 Paulo H. Alkmin. All rights reserved.