Hacks: The 3-Sentence Prompt That Turns Any AI Into Your Editor
Most people use AI to write from scratch. The higher-ROI move is using it to edit what you already wrote. One prompt pattern that works across Claude, ChatGPT, and Gemini.
You wrote the thing. That was the hard part — supposedly. You sat down, pushed through the blank page, and now there are seven hundred words on the screen that didn't exist an hour ago. A blog post, maybe. A newsletter. A cover letter. A speech for your best friend's wedding.
And now you're staring at it, knowing something's off, unable to say what.
This is where most people make the same mistake. They highlight the whole document, paste it into an AI, and type some version of "make this better." The AI obliges. It returns a gleaming, soulless rewrite that sounds like it was dictated by a customer-service training manual. Every rough edge filed smooth. Every odd little phrase — the ones that actually sounded like you — replaced with something blander and more "professional."
You read it. You hate it. You close the tab. The original draft sits untouched on your desktop for three more days.
There's a better way. It takes three sentences.
The prompt
Here it is, the whole thing:
“Here is my draft: [paste your text]. My audience is [who will read this]. The one thing this piece must accomplish is [specific goal]. Please mark — don't rewrite — the three weakest sentences and explain, in one line each, why they're weak.”
That's it. Copy it. Save it somewhere. Tape it to your monitor if that's your style. It works in Claude, ChatGPT, and Gemini — I tested all three — and it changes what AI editing feels like from "hand your essay to a stranger who rewrites it" to "sit next to a sharp friend who points at the problems and lets you fix them."
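If you find yourself retyping the boilerplate, you can wrap the pattern in a tiny helper so only the three blanks change. A minimal sketch in Python — the function name and parameters are my own, not part of any tool:

```python
def editor_prompt(draft: str, audience: str, goal: str, n_weakest: int = 3) -> str:
    """Fill the three-sentence editing prompt with a draft, audience, and goal."""
    return (
        f"Here is my draft: {draft}\n"
        f"My audience is {audience}. "
        f"The one thing this piece must accomplish is {goal}. "
        f"Please mark -- don't rewrite -- the {n_weakest} weakest sentences "
        f"and explain, in one line each, why they're weak."
    )

# Example: the fundraiser email from later in this post.
prompt = editor_prompt(
    draft="Hi everyone! Our annual spring fundraiser is going to be on May 10th...",
    audience="parents who get fifteen school emails a week and delete most of them",
    goal="get them to mark May 10th on their calendar right now",
)
print(prompt)
```

Paste the output into whichever chat window you prefer; the model never sees the helper, only the filled-in prompt.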
Three moving parts. Each one does something specific. Let's break them down.
Part one: "My audience is…"
Most people skip this, and it's the single biggest reason AI feedback feels generic. When you don't tell the AI who's reading, it defaults to a phantom: some featureless, mildly educated, vaguely professional adult who wants everything to sound like a LinkedIn post. That's nobody. That's not your reader.
Your reader is specific. Your reader is a hiring manager scanning fifty cover letters during a layover at O'Hare. Your reader is your sister-in-law who will hear this toast at a rehearsal dinner after two glasses of champagne. Your reader is a PTA board that has eleven other agenda items and thirty seconds of patience for your email.
When you name the audience, the AI stops optimizing for "good writing" in the abstract and starts optimizing for fit — will this sentence land with that particular human? The feedback changes completely. Generic advice like "this sentence is a bit wordy" becomes pointed advice like "your hiring manager won't read past the second comma here — they're skimming."
Try it with the same draft twice. Once, leave the audience blank. Once, write "my audience is a room of fifty skeptical high-school juniors who'd rather be on their phones." Watch how different the three marked sentences are.
Part two: "The one thing this piece must accomplish is…"
This is the constraint that does the heaviest lifting, and it's the one that feels most uncomfortable to write. Because it forces you to decide.
Not "the three things." Not "the main themes." The one thing.
A cover letter's one thing might be: convince the reader I can do this job despite having no direct experience in the field. A wedding toast's one thing might be: make my best friend cry exactly once, in a good way. A blog post's one thing might be: leave the reader knowing how to negotiate a freelance rate by the time they finish the last paragraph.
When you hand that single constraint to the AI, it reads your draft through a lens. It stops caring whether your prose is "tight" or "polished" or "engaging" — words that mean nothing without a goal behind them — and starts caring whether each sentence pushes toward the one thing or drifts away from it. The three sentences it flags will almost always be the ones where you lost the thread. The ones where you got self-conscious, or tried to sound smart, or forgot who you were talking to.
You'll know the AI got it right because you'll feel a small, annoying flash of recognition. Yeah. I knew that sentence was the problem. I just didn't want to admit it.
Part three: "Mark — don't rewrite"
This is the crucial bit. The verb that saves everything.
"Rewrite" tells the AI to take over. "Mark" tells it to point. The difference is the difference between a surgeon who operates on you while you're unconscious and a physical therapist who says "that's the muscle — feel it? — now you move it."
When an AI rewrites, you learn nothing. You get a finished product that's technically fine and emotionally dead. When an AI marks, you get a diagnosis. Three sentences, circled in red, with a one-line explanation each. Then it's on you. You're the one who decides how to fix them — or whether to fix them at all. Maybe the AI flagged a sentence you love. Great. Keep it. You're the writer. But now you're keeping it on purpose, not out of inertia.
The "don't rewrite" instruction also prevents the most common AI-editing failure mode: style drift. Left to its own devices, every AI will slowly nudge your prose toward the same tepid center. Your weird, angular, slightly-too-casual voice gets rounded into something that could appear in any corporate memo. The "mark, don't rewrite" constraint keeps your voice intact. The AI identifies the weak spots. You do the surgery.
What this looks like in practice
Here's a real before-and-after. Say you're writing a short email to parents about a school fundraiser, and your draft looks like this:
“Hi everyone! I wanted to reach out and let you know about an exciting opportunity coming up. Our annual spring fundraiser is going to be on May 10th and we have a lot of amazing things planned. It would be great to see as many families there as possible. Please let me know if you have any questions!”
You paste it in with the three-sentence prompt. Audience: parents who get fifteen school emails a week and delete most of them. One thing: get them to mark May 10th on their calendar right now.
Here's what Claude returned:
“Sentence 1: "I wanted to reach out and let you know about an exciting opportunity coming up." — This is a preamble that delays the point; parents scanning on their phone will stop reading here.”
“Sentence 2: "We have a lot of amazing things planned." — Vague; gives no reason to show up. A skimming parent needs one concrete detail to anchor a memory.”
“Sentence 3: "Please let me know if you have any questions!" — Passive close. The goal is to get a date marked, not to invite questions.”
Now you know what to fix. And the fixes write themselves. Instead of "I wanted to reach out," you open with the date. Instead of "a lot of amazing things," you name one — the dunk tank, the silent auction, whatever makes a kid tug on a parent's sleeve. Instead of "let me know if you have any questions," you close with "mark May 10th — here's the calendar link."
The rewritten version is yours. It still sounds like you. It just doesn't have three dead sentences in the middle anymore.
I ran the same fundraiser draft through ChatGPT and Gemini with identical prompts. ChatGPT flagged sentences one and three (same as Claude) but swapped sentence two for "Our annual spring fundraiser is going to be on May 10th" — arguing the roundabout phrasing buried the date. Gemini flagged all three of Claude's picks but added a note that the subject line (which I hadn't included) was doing no work either. All three models agreed on the core diagnosis: the email delays its own point and gives the reader no concrete reason to act.
That's what you want. Not identical output. Convergent diagnosis. When three different models point at the same weak spots, those spots are genuinely weak.
Variations that work
Once you have the base prompt memorized, you can twist it for different situations:
For longer pieces (over 1,500 words), change "three weakest sentences" to "five weakest sentences." More real estate means more places for the thread to slip.
For pieces where tone matters more than argument — toasts, eulogies, personal letters — change "weakest" to "most tonally inconsistent." The AI will look for the sentences where your voice cracks into formality or drops into cliché.
For professional writing — cover letters, grant proposals, reports — add a fourth sentence to the prompt: "Assume the reader will spend no more than ninety seconds on this." That time constraint sharpens every piece of feedback.
For creative writing, try: "Mark the three sentences where you can feel the writer's self-consciousness." That particular phrasing — the writer's self-consciousness — produces eerily good results. The AI finds the sentences where you hedged, over-explained, or softened an image because you were afraid it wouldn't land.
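All four variations are small substitutions in the same template, which makes them easy to keep in one place. A hypothetical sketch — the preset names and structure are mine, purely for illustration:

```python
# Each preset tweaks one slot of the base prompt; this list is illustrative,
# not an official set of modes.
VARIATIONS = {
    "long_piece":   {"count": "five", "target": "weakest sentences"},
    "tone_first":   {"count": "three", "target": "most tonally inconsistent sentences"},
    "professional": {"count": "three", "target": "weakest sentences",
                     "extra": "Assume the reader will spend no more than "
                              "ninety seconds on this."},
    "creative":     {"count": "three",
                     "target": "sentences where you can feel the writer's "
                               "self-consciousness"},
}

def variant_prompt(draft, audience, goal, preset="long_piece"):
    v = VARIATIONS[preset]
    base = (
        f"Here is my draft: {draft}. My audience is {audience}. "
        f"The one thing this piece must accomplish is {goal}. "
        f"Please mark -- don't rewrite -- the {v['count']} {v['target']} "
        f"and explain, in one line each, why they're weak."
    )
    if "extra" in v:  # the professional preset appends the time constraint
        base += " " + v["extra"]
    return base
```

The point isn't the code — it's that each variation changes exactly one variable, so you can tell which change moved the feedback.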
The tools that pair with this
If you want to go deeper than a single prompt, a-gnt has a few things built for exactly this kind of editing work.
📖 The Draft Reader is a soul — an AI persona you can have an ongoing conversation with — designed to read rough drafts without flinching. Hand it your raw pages and it tells you what actually needs to change, in a voice that's honest without being unkind. Think of it as the three-sentence prompt extended into a full editorial relationship.
✍️ The Writing Feedback Coach approaches the same problem from a teaching angle. Instead of just marking what's weak, it explains the pattern behind the weakness — why you keep burying your lede, why your transitions collapse in the middle third, why your endings trail off instead of landing. It treats you like an adult learner, not a grade to fix.
✍️ The Plain-Spoken Copy Editor is the blunter version. It kills your darlings before you have to. Less coaching, more scalpel. If your draft is nearly done and you need someone to cut the last fifteen percent of fat, that's the one.
And for the specific case of email — the genre where most of us do our worst writing — ✉️ Email Polish rewrites emails to be clearer, kinder, and shorter without sounding fake. Sometimes you don't want to mark and fix; sometimes you just want the reply sent before lunch. That's fair. Use the right tool for the moment.
Why this works (the boring reason)
Large language models are better at diagnosis than treatment. That's not a flaw — it's an architectural reality. When you ask an AI to identify a problem in a piece of writing, it's doing something it's genuinely good at: pattern-matching against millions of examples of strong and weak prose. When you ask it to rewrite, it's doing something much harder: generating new text that matches your voice, your intent, and your audience simultaneously. It can do it, but the output is always a compromise — an average of all the voices it's ever seen, pushed gently toward yours.
The three-sentence prompt exploits the strength and sidesteps the weakness. You get the AI's best skill (diagnosis) and keep your own best skill (your voice). It's a division of labor that actually makes sense.
The sixty-second test
Open whatever AI you use. Paste in something you've written recently — an email, a post, a paragraph from a report. Fill in the three blanks:
“Here is my draft: [your text]. My audience is [be specific — not "professionals," not "readers," a real group of real humans with real constraints on their attention]. The one thing this piece must accomplish is [one thing, not three, not five, one]. Please mark — don't rewrite — the three weakest sentences and explain, in one line each, why they're weak.”
Send it. Read what comes back. If you feel that small flash of annoyed recognition — yeah, I knew that sentence was the problem — the prompt did its job.
Now fix the three sentences yourself. Read the whole thing again.
It's better, isn't it. And it still sounds like you.