The Content Designer's Survival Guide to AI: Voice, Tone, and Why Plain Language Wins
AI is great at generating product strings. It's terrible at voice. It's terrible at plain language unless you tell it to be. Here's the weekly microcopy ritual.
A content designer I know started her first shift at a new company by being asked to review two hundred and forty error messages that "an AI had already written." She was told it would take an hour. It took four days. Of the two hundred and forty strings, seventy-one were fine, ninety-three needed small adjustments, and seventy-six were subtly wrong in ways that would have made users feel stupid, scared, or patronized. The AI had been fluent and fast and cheap, and it had also almost shipped a product whose voice sounded like a friendly technical support robot with no stake in the outcome.
I am going to write this piece for her, because she is the specific person I am worried about. She is good at her job, she is underfunded, she is outnumbered by engineers, and she has just been handed a tool that her leadership thinks can replace her and she suspects can help her. Both of those suspicions are partly right. This piece is about how to tell the difference.
What AI is actually good at, in microcopy terms
Let's start with the useful part. I want to do it first, because the "AI is terrible at X" takes are everywhere and they are mostly true and mostly not the whole story.
AI is good at first drafts of strings you already know the shape of. "We need an empty state for the saved-items page." "We need an error for an expired magic link." "We need the confirmation message after someone deletes their account." The AI has seen thousands of these. It will produce something that is 70% correct on the first try, in under a minute, which is 70% more string than you had two minutes ago. Your job is no longer "write from scratch." Your job is "edit toward correctness." That is a faster job, and it is the job the AI actually speeds up.
AI is good at pluralization, tense, and length variants. "Give me this string in past, present, and future tense." "Give me a version that's half as long and one that's twice as long." "Give me the version for one item, two items, and many items." Every content designer has written these variants a thousand times, by hand, and they are the kind of work that is mechanical enough to be annoying and stakes-laden enough to be dangerous when rushed. Handing this to an AI and then checking its work is a faster version of doing it yourself, and the checking catches the errors cheaply.
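The variant work is mechanical enough that it usually ends up in code as well as in prompts, which is also how you check the AI's drafts cheaply. A minimal sketch of the zero/one/many pattern, using invented strings that are not from any real product:

```python
# Hypothetical saved-items strings; the keys mirror ICU-style plural categories.
SAVED_ITEMS = {
    "zero": "Nothing saved yet",
    "one": "1 saved item",
    "other": "{count} saved items",
}

def saved_items_label(count: int) -> str:
    """Pick the English plural variant for a count."""
    if count == 0:
        return SAVED_ITEMS["zero"]
    if count == 1:
        return SAVED_ITEMS["one"]
    return SAVED_ITEMS["other"].format(count=count)
```

Verifying an AI-drafted table like this takes seconds; writing it by hand for every string in the product is exactly the tedium worth delegating.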
AI is good at plain-language rewrites. Give it a paragraph of legalese and ask for a seventh-grade reading level and it will do a decent first pass. 📜prompt-rewrite-this-with-plain-language is tuned for exactly this job, and it does something most AIs don't do by default: it keeps technical accuracy while lowering the reading level, instead of dumbing things down until they're wrong. This is the difference between "plain language" (clear, accurate, short words) and "simplification" (removed information). The first is what you want. The second is what a lazy rewrite produces.
AI is good at catching inconsistency across a product. "Here are fifty strings from different parts of the app. Which ones don't match in voice?" The AI will notice that your empty states sometimes start with "It looks like…" and sometimes with "You don't have…" and sometimes with "Nothing here yet!" and will flag that. A human content designer will notice too, but it will take the human three days and the AI three minutes, and the human's time is better spent deciding which of the three patterns to standardize on.
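The cheapest version of this pattern-spotting doesn't even need a chat window. A rough sketch that groups strings by their opening words to surface competing patterns (the empty-state strings are invented examples):

```python
from collections import defaultdict

# Invented empty-state strings illustrating three competing openers.
EMPTY_STATES = [
    "It looks like you haven't saved anything yet.",
    "You don't have any notifications.",
    "Nothing here yet!",
    "It looks like your inbox is empty.",
]

def group_by_opener(strings, words=2):
    """Group strings by their first few words to expose inconsistent patterns."""
    groups = defaultdict(list)
    for s in strings:
        opener = " ".join(s.split()[:words])
        groups[opener].append(s)
    return dict(groups)

groups = group_by_opener(EMPTY_STATES)
# More than one group means the voice is drifting across the product.
for opener, members in groups.items():
    print(f"{opener!r}: {len(members)} string(s)")
```

A script like this finds the drift; only a human can decide which of the three openers is the voice.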
That is the useful list. Notice what is missing.
What AI is terrible at, in microcopy terms
AI is terrible at voice. Voice is the thing that makes a product feel like a specific company, run by specific people, with specific opinions about how to talk to customers. Voice lives in choices the AI does not know how to make: whether to use contractions, whether to say "we" or "[company name]," whether to apologize or not, whether to use exclamation points, whether to joke, whether to curse in the error messages on purpose, whether to sound like a concierge or a roommate or a scientist. The AI has no point of view, so it defaults to "corporate-friendly-neutral," which is the voice every product on earth already has, and which is exactly what your brand is trying not to sound like.
AI is terrible at knowing what the user is actually trying to do. It will write a button label that is grammatically correct for the string it sees, and wrong for the user's intent. "Save" is correct when the user is saving. "Save" is wrong when the user thinks they're publishing. The AI cannot tell which one the user thinks they're doing, because that is a question about the mental model of a product the AI has never used.
AI is terrible at knowing when to break its own rules. A good content designer will break consistency deliberately for effect. The one error message that is slightly longer than the others, because this specific error is high-stakes and the user needs the extra sentence. The one empty state that is a joke, because this specific feature is low-stakes and the joke is the right move. The one button label that violates the usual verb pattern, because the action is irreversible and the violation is the warning. The AI, asked to write in a consistent voice, will produce consistent voice, and consistent voice is often the wrong thing.
AI is terrible at the absence of a string. Sometimes the best microcopy decision is to not write any microcopy at all. Delete the help text. Remove the tooltip. Let the button stand alone. The AI, asked to improve a string, will never recommend deleting the string, because its job is to produce output. A content designer's job is to reduce it.
These are not bugs to be patched. They are structural. You will not get better at voice by prompt-engineering your way out of it. You will get better at voice by bringing voice to the AI, instead of asking the AI to produce voice from scratch.
A real microcopy review session
Here is the workflow I would teach the content designer from the opening. It is the workflow I think most content teams will converge on in the next year, because it is the one where the human and the AI do the work each is actually good at.
Step one: establish voice outside the AI. Write a voice document. One page. Not a manifesto, not a brand book — a page. Who we are, who we are talking to, what we say, what we don't say, five examples of a string we got right, five examples of a string we got wrong. This document is for the AI as much as for humans. You will paste it into every session.
Step two: open ✏️soul-the-content-design-coach. This is a soul I built for exactly this job. It does not pretend to have voice of its own. It treats voice as something the user brings, and its job is to pressure-test whether the strings match the voice. You hand it the voice document, and it becomes an editor for your voice, not a generator of someone else's.
Step three: batch the strings. Don't review microcopy one string at a time. Batch twenty or fifty together. This is where AI earns its keep — it can hold two hundred strings in context at once and spot patterns across them, which a human reviewer physically cannot do at speed. Paste the batch. Ask for a pass.
Step four: ask the right three questions, in this order.
- "Which of these strings don't match the voice document?" The AI will flag the ones that sound off. You will agree with most of those and disagree with some. The disagreements are the interesting part — they are where your voice exists in your head but not on the page.
- "Which of these strings are unclear to a user at a seventh-grade reading level?" 📜prompt-rewrite-this-with-plain-language handles this pass if you want it separated out. You will find that about a quarter of your strings fail this check, and you will fix them in minutes. This is the highest-impact, lowest-effort edit a content team can make.
- "Which of these strings are doing something a better design could do without a string at all?" This is the question the AI is worst at, and it is the question you want to ask anyway, because asking it in the presence of the AI forces you to justify every string out loud. Half the time you will delete the string. The other half you will realize the string is load-bearing and defend it with a note.
Step five: run ✒️skill-content-design-microcopy-review as the final sweep. This is a more structured pass that checks for the patterns you can't catch by eye — inconsistent terminology across the product, broken parallelism in lists, headings that don't match their section content, button labels that don't match their actions. It's the editorial copyedit equivalent of a lint pass. Not where the creative work happens. Where the quality floor lives.
Step six: the one thing only a human can do. Read the final batch out loud, in your brand voice, and feel whether it sounds like you. This step takes five minutes. It is the step the AI cannot replace, and it is the step most teams skip because they are tired and because the AI said it was fine. Do not skip it.
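Step three's batching can be scripted if your strings live in a flat locale file. A minimal sketch, where the file layout, key names, and strings are all assumptions about a typical setup, not any particular product:

```python
# Hypothetical locale data; a real project would load this with json.load().
STRINGS = {
    "saved_items.empty_state": "Nothing here yet!",
    "saved_items.delete_confirm": "Delete this item?",
    "auth.link_expired": "Sorry, something went wrong.",
}

def batch_by_area(strings):
    """Group string keys by product area (the prefix before the first dot)."""
    batches = {}
    for key, text in strings.items():
        area = key.split(".", 1)[0]
        batches.setdefault(area, []).append((key, text))
    return batches

# Emit one review document, grouped by product area, ready to paste into a session.
for area, items in sorted(batch_by_area(STRINGS).items()):
    print(f"## {area}")
    for key, text in items:
        print(f"- {key}: {text}")
```

The point of the grouping is context: a reviewer (human or AI) judging twenty strings from the same product area catches voice drift that scattered single strings hide.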
The error-message case
I want to show what this looks like on a specific category, because "microcopy review" is abstract and "error messages" is concrete. Error messages are also where most products are the worst, and where the AI's leverage is highest.
Take the error my content designer was reviewing on her first day. "Sorry, something went wrong. Please try again later."
The AI's first pass at a rewrite, with just "make this a better error message" as the prompt, produced: "We're sorry for the inconvenience. Our team has been notified, and we're working to resolve the issue. Please try again in a few minutes."
This is worse. It is longer, it apologizes more, it lies (the team has not been notified; there is no ticket), and it still does not tell the user what happened or what to try. It is the "corporate-friendly-neutral" default, and it is the thing the AI will produce every time unless you redirect it.
Now the same rewrite with ⚠️prompt-error-message-rewriter, which enforces the "what happened, why, what to do" structure and explicitly forbids apology-padding: "We couldn't save your changes because the connection dropped mid-save. Your draft is still in the editor — refresh and try again, and if it keeps failing, email us at support@[company]."
This is better. It is also now in a voice — a specific voice, which happens to be the default voice of the prompt, and which is probably not your voice. This is where ✏️soul-the-content-design-coach steps in: hand it the rewrite and your voice document, and ask it to align the rewrite to your voice without losing the structure. The output is a third draft, voice-matched, structure-enforced, human-reviewable. Total time: under two minutes per error.
Multiply by two hundred and forty errors and the four-day project becomes a one-afternoon project. The four days saved go into the things the AI cannot do: the voice document itself, the deletion of unnecessary strings, the design conversations about where the error shouldn't have been thrown in the first place.
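The "what happened, why, what to do" structure can also live as a template in the codebase, which helps the next sprint's error messages start from the structure instead of regressing to "something went wrong." A sketch with hypothetical field names, not a real API:

```python
from dataclasses import dataclass

@dataclass
class ErrorCopy:
    what_happened: str  # plain statement of the failure, in the user's terms
    why: str            # the cause, if known; no apology padding
    what_to_do: str     # a concrete next step, including escalation if needed

    def render(self) -> str:
        return f"{self.what_happened} because {self.why}. {self.what_to_do}"

save_failed = ErrorCopy(
    what_happened="We couldn't save your changes",
    why="the connection dropped mid-save",
    what_to_do="Your draft is still in the editor; refresh and try again.",
)
```

The template enforces structure, not voice; the voice pass still happens in review.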
The goal of bringing AI into content design isn't to produce more strings. It's to spend less time on the strings that should be mechanical, so you have more time for the strings that shouldn't be.
Your weekly microcopy ritual
I want to give the content designer a ritual she can actually run, every week, in an hour, without needing to schedule a meeting with anyone.
Monday (15 minutes). Pull the week's new strings from the codebase or the Figma file. Batch them into a single document with product area as the only header.
Tuesday (20 minutes). Run the three-question review through ✏️soul-the-content-design-coach. Mark strings as [keep], [rewrite], or [delete]. Don't do the rewrites yet. Just triage.
Wednesday (15 minutes). Do the rewrites. Use 📜prompt-rewrite-this-with-plain-language for the ones that need plain-language passes and ⚠️prompt-error-message-rewriter for error-specific ones. Read each out loud before committing.
Thursday (10 minutes). Send the rewrites as a PR or design comment, with a one-sentence rationale for any string that changed significantly. The one-sentence rationale is what makes this ritual scale socially — engineers and PMs accept edits when they understand why.
Friday (none). The ritual ends Thursday. Fridays are for the work the AI cannot help with: the voice document revisions, the cross-team conversation, the rest.
One hour a week. Four hours a month. Forty-eight hours a year. In return you get a product whose microcopy stays in voice, stays clear, and stops drifting toward corporate-friendly-neutral. The drift is the enemy. The ritual is the dam.
Do this tonight
- Write the voice document. One page. If you do nothing else this month, do this. Without it, every AI session will be a little bit worse than the last.
- Open ✏️soul-the-content-design-coach and paste the voice document in as the system context. Test it with three strings you know the voice of. If the coach agrees with your take, you have a working session. If it doesn't, adjust the document, not the coach.
- Pick one category of microcopy from your product (errors are a great choice) and run the three-question review on ten strings. Ship the rewrites that survive. Delete the ones that shouldn't have existed. Notice how much of the work you got done in thirty minutes.
The content designer from the opening scene — the one with two hundred and forty error messages and four days of work she didn't need to do — is still at that company. She now runs the ritual above every week. The first thing she wrote on her desk whiteboard when she took the job was: The AI's first draft is not the last draft. It is still there. She crossed out "not" once, in frustration, during a bad week. She crossed out the cross-out two weeks later, after she remembered why she wrote it the first time.
That is the whole survival guide, in one whiteboard line. The AI's first draft is not the last draft. You are.
Part of the a-gnt accessibility and UX series. Written by a-gnt Community.
Tools in this post
Error Message Rewriter
Three rewrites of any error: what went wrong, what to do, when to ask for help.
Rewrite This With Plain Language
Paste confusing medical/legal/government text. Get back a 6th-grade-reading-level version that keeps the meaning.
Content Design Microcopy Review
Reviews UI strings for plain language, voice/tone consistency, error message quality, action-verb clarity.
The Content Design Coach
A salty editor who has seen every microcopy crime. Won't let you ship 'an unexpected error occurred.'