The Prompt That Refuses to Refactor Until You Explain the Bug

a-gnt Community · 6 min read

A constraint-based prompt that won't touch your code until you've explained — in prose — what you think is going wrong. Annoying, then liberating.

There is a prompt in the a-gnt catalog that I think is the single most underrated thing on the entire site. Nobody talks about it. It is not flashy. It is not the one with the cool soul attached. It does not involve an MCP server or a bench or a blend. It is just a few lines of text that change the behavior of whichever model you point them at, and the behavior they produce is, in my opinion, the most important thing an AI coding tool can do.

It refuses to touch your code until you have explained the bug.

That is the whole prompt. That is the whole idea. You paste it at the top of a Claude Code session, or a Cursor chat, or a plain API call, and from that point on the model simply will not modify any file until you, the human, have typed out a description, in prose, of what you believe is going wrong and what you have already tried. If you ask it to "just fix" something, it asks you a question instead. If you beg, it asks the question again. If you try to trick it by saying "I already explained, just do it," it calls your bluff.

The first time you use it, it is annoying. The second time, it is liberating. The third time, you start to suspect that every single AI-coding disaster you have ever had traces back to the fact that you did not have this prompt running.

The text

Here is roughly what it looks like. The exact wording in the catalog has been sharpened a bit, but this is the shape:

You are a senior engineer pairing with me on a bug.

Hard constraint: you will not modify, write, edit, or suggest
edits to any file until I have described, in at least two sentences
of plain prose, (a) what I believe the bug is, and (b) what I have
already tried.

If I ask you to "just fix it", "please try", "go ahead", or any
variation, you will politely refuse and ask me to describe the bug
and my attempts. You will not accept a one-sentence description.
You will not accept a stack trace as a substitute for prose.
You will not accept "it doesn't work" or "it's broken".

Once I have given you a real description, you may ask at most one
clarifying question, and then you may proceed.

If at any point I try to short-circuit this by rephrasing, start over.

That's it. No tools. No context. No soul (though if you want one, pair it with soul-athos, who has exactly the right flavor of unimpressed patience for this). You drop it in, and the model becomes a wall.
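To see how little machinery the constraint actually needs, here is the gate sketched as plain logic. The function name and the heuristics (sentence count, canned-refusal list, stack-trace pattern) are my own illustration of the rules the prompt states, not code from the catalog:

```javascript
// A toy gate implementing the prompt's hard constraint as plain logic.
// Heuristics are illustrative, not the catalog's actual wording.

const CANNED_NON_DESCRIPTIONS = ["it doesn't work", "it's broken"];

function descriptionPassesGate(text) {
  const trimmed = text.trim().toLowerCase();

  // Reject the canned non-descriptions the prompt explicitly names.
  if (CANNED_NON_DESCRIPTIONS.some((phrase) => trimmed.includes(phrase))) {
    return false;
  }

  // A stack trace is not prose: lines like "at Object.render (app.js:42)"
  // are a giveaway that the user pasted a trace instead of an explanation.
  const lines = trimmed.split("\n").map((line) => line.trim());
  if (lines.some((line) => /^at\s+\S+\s+\(/.test(line))) {
    return false;
  }

  // Require at least two substantial sentences of plain prose.
  const sentences = trimmed
    .split(/[.!?]+/)
    .filter((sentence) => sentence.trim().length > 10);
  return sentences.length >= 2;
}
```

A model following the prompt does this judgment with far more nuance, of course; the point of the sketch is that the constraint is checkable, not vague vibes about effort.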

What it feels like to use it

The first time I ran it, I had a React component that wasn't re-rendering when a piece of state changed. I knew what I wanted: I wanted the model to look at the component, spot the stale closure or the missing dependency, and fix it. That is, after all, the whole point of having a model. I typed "this component isn't re-rendering, can you fix it" and hit enter.

The model said no. Politely. It said: please describe in at least two sentences what you think is wrong and what you've tried.

I was annoyed. I was on a clock. I typed something like "I think the useEffect isn't firing when the prop changes. I tried logging and the log doesn't show up."

The model said: okay, one clarifying question before I proceed. Is the prop coming from a parent that is itself re-rendering, or from a context, or from a ref?

And that is the moment the whole thing clicked, because the answer was "it's coming from a ref," and the instant I typed that sentence I knew what the bug was. Refs don't trigger re-renders. The model had not done anything. It had not read my code. It had asked me one question, and I had solved my own bug in the act of answering it.
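For anyone who has not hit this particular wall yet, the failure can be modeled without React at all: a state setter notifies the renderer, while a ref is just a mutable box that notifies nobody. The store below is a toy of mine to make that asymmetry concrete, not React's implementation:

```javascript
// Toy model of React's state-vs-ref asymmetry. Not real React internals.
function createStore() {
  let renders = 0;
  const rerender = () => { renders += 1; };

  // "useState"-style: writing through the setter schedules a re-render.
  let state = 0;
  const setState = (value) => { state = value; rerender(); };

  // "useRef"-style: a plain mutable box. Writing to it notifies nobody,
  // so no re-render happens and no effect ever fires.
  const ref = { current: 0 };

  return { setState, ref, renderCount: () => renders };
}

const store = createStore();
store.setState(1);     // triggers a render
store.ref.current = 2; // silently mutates; render count stays at 1
```

Which is exactly why my log line never showed up: the effect had no render to fire in.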

I closed the tab without editing a single file. The bug was fixed in the time it took to type two sentences at a prompt that refused to help me.

The pattern it unlocks

There is a well-known technique called rubber duck debugging, where you explain your problem to an inanimate object and, halfway through the explanation, you solve it yourself. Every developer has done this. Most of us have actual ducks on our desks. The technique works because the bottleneck in debugging is almost never information, it is articulation. The bug is already in your head. You just haven't said it out loud in the right order yet.

The problem with AI coding tools, all of them, is that they collapse this step. The model is fast. It is eager. It is ready to start rewriting your file the instant you hint that something is wrong. And because it is fast and eager, you never get to the part where you articulate the problem, because the model starts solving it before you have finished thinking about it. Which means that half the time the model is solving the wrong problem, confidently and quickly, and you are sitting there watching your working code get replaced with code that also doesn't work, but in a new and exciting way that will take you an hour to unravel.

This prompt short-circuits that failure mode by refusing to let the model be fast. It forces the articulation step back into the loop. It makes the model behave like a rubber duck that can, if necessary, also write code, but only after you have rubber-ducked.

The side effect, which I did not expect, is that I now solve a meaningful percentage of my own bugs before the model ever touches anything. Not because the model is useless. Because the act of being forced to describe the bug in prose is, by itself, usually enough. The model becomes a constraint on me, not a tool for me, and it turns out that for debugging specifically, a constraint on me is more valuable than any tool I have ever used.

Why this is underrated

Most prompts in most catalogs are about getting the model to do more. Do more research. Do more steps. Do more autonomous work. This prompt is about getting the model to do less. It is a negative prompt in the literal sense: it removes a capability on purpose, because the capability was causing harm.

Developers who have been burned by AI coding, and I mean really burned, the kind where you lose an afternoon because the model ripped out your working auth layer to "fix" a typo, tend to get this instantly. Developers who have not been burned yet think it sounds pointless. The difference is a matter of time, not opinion. Eventually everyone gets burned, and eventually everyone wants a prompt that slows the model down when the stakes are high.

The catalog has fancier things. There are benches wired up with context7 for live docs, ref-tools-mcp for symbol lookup, puppeteer for browser work, supabase and neon and convex for database introspection, browserstack for cross-browser, mux for video, keboola for data. They are all great. I use most of them weekly. But the prompt that refuses to refactor until I explain the bug is the one I would hand to someone on their first day of using AI to code, before any MCP server, before any bench, before any soul. It is the guardrail that makes the rest of the stack safe.

It is sitting in the a-gnt catalog, in the prompts section, waiting to annoy you into being a better debugger.
