Hacks: The Two-Word Addition That Makes Claude Stop Being a Coward
Claude hedges when you ask for opinions. Two specific words, added to any prompt, flip it from diplomatic waffle to committed take. Here's the phrase, why it works, and when not to use it.
Last week we needed to pick between two domain names for a small project. Both were available. Both were short. One had a hyphen and spelled out the obvious meaning; the other was a clean made-up word that took two seconds longer to explain but looked better on a sticker. We asked Claude, sincerely, with no setup: which of these two domain names is better for my business, a-foo.com or foonix.com?
The answer came back in 412 words. It started with "Both options have their merits." It went on to lay out the considerations on each side. It used the phrase "ultimately depends on" twice. It mentioned brand recall, SEO, memorability, type-in traffic, and "your target audience's preferences." It closed by saying that the right choice depended on factors only we could weigh, and that we should consider running the names past five to ten people in our target market.
We did not have five to ten people. We had a coffee getting cold and a registrar tab open. We needed an answer.
This is the third entry in the Hacks series on a-gnt. The first two were the 12-minute cover letter that sounds like you and the four-sentence text that restarts a conversation. Same shape: one specific technique, copy-pasteable, sixty seconds, no theory unless the theory is fun. Today's hack is the smallest one yet. It is two words. You add them to the end of any prompt where you actually want a Claude with a spine, and the spine arrives.
The two words are: be specific.
Sometimes "and pick one." Sometimes "and commit." But be specific is the workhorse, and it is the one we want you to try first.
The before-and-after
Same domain question. Same Claude. Same context.
The original:
"Which of these two domain names is better for my business — a-foo.com or foonix.com?"
The fix:
"Which of these two domain names is better for my business — a-foo.com or foonix.com? Be specific about which one you'd pick and why."
The answer this time was 90 words. It opened with "foonix.com." It then gave three reasons — the cleanness of the name, the fact that the hyphen would get dropped in word-of-mouth and end up at the wrong URL, and that "made-up but pronounceable" was historically a stronger brand pattern than "literal-with-punctuation." It ended with one sentence noting that if SEO from exact-match terms mattered more than branding, the calculus flipped, but that the model's recommendation was foonix.com.
That is the answer we needed. We bought foonix.com (not really; the names are made up to protect a side project). The whole exchange took eleven seconds. The earlier 412-word answer took us four seconds to read and zero seconds to act on, because there was nothing in it to act on.
The two words did not change the data Claude had. They did not change the model. They did not unlock a hidden mode. They changed the register of the response, and the register is almost always the part of the answer you actually need.
Why it works
Modern AI assistants — Claude very much included — are trained, post-pretraining, to be helpful, harmless, and honest, in roughly that order, with a heavy weighting on not getting people upset. The training process for that polish involves enormous amounts of human feedback, and the human feedback tends to reward certain shapes of answer. Hedging is one of those shapes. "It depends" is, statistically, very rarely wrong, and very rarely makes a user complain. So the training pulls the model toward hedging on questions where committing might offend someone or might be revealed to be wrong.
Most of the time this is good. Epistemic humility is a real virtue and we want AI to have it. The problem is that the model cannot tell — without a signal from you — whether you are asking a question where humility is the right move ("which of these two paintings is more beautiful") or a question where humility is just cowardice in a tasteful sweater ("which of these two domain names is better for my business and I have to buy one in the next ten minutes").
The phrase be specific is the signal. It tells the model: I am giving you permission to commit. I am not going to be hurt if you have a take. I am not going to come back tomorrow and tell you you were wrong. I am asking, on purpose, for the version of you that has an opinion. The hedging is what I am opting out of. Please give me the rest.
There is no magic. The phrase doesn't bypass anything. It just moves the model out of the local minimum where "I'll list considerations" is the safest possible move and into the slightly riskier territory where "I'd pick X, here's why" lives. That territory is where most of the actually useful answers are.
It's a small flag. It works every time we use it.
When to use it
Anywhere you are asking for an opinion, a recommendation, a judgment call, or a direct comparison and the answer matters in the next twenty minutes. Some examples that cover the territory:
- "I have these two photos for the homepage hero. Which one is stronger? Be specific about which one you'd pick and why."
- "Here are three opening lines for my essay. Which one is best? Be specific."
- "I'm trying to decide whether to refactor this function or leave it. Be specific — what would you do?"
- "Read this email. Should I send it or rewrite it? Be specific."
- "Of these five candidates' resumes, who's the strongest fit for the role? Be specific about who and why."
- "Here are two product names. Which is better? Be specific."
Notice the pattern: the question is closed (there is a finite set of answers), the stakes are real but not catastrophic, and you already know the model could go either way. The phrase tells the model to go one way and explain itself.
When not to use it
Two important cases.
Genuinely contested ethical questions. If you ask a model "is the death penalty wrong, be specific," the model is going to commit to a position on a question where commitment is itself the wrong move, because the model has no business adjudicating ethics for you. Hedging on those questions is correct, and you should leave it alone. The same goes for politically loaded questions, medical-decision questions where the right answer depends on facts only your doctor knows, and any question where your own values are the load-bearing input. The hedging is doing real work. Do not bypass it.
Empirical questions where the model genuinely doesn't know. If you ask "which of these two startups is going to be worth more in 2030, be specific," the model is going to make up an answer with confidence, and the confidence will not match the underlying uncertainty. This is the failure mode the hack creates: it can promote a guess to the rank of an opinion, and an opinion sounds like a fact when it's wearing the right clothes. Use be specific when you want a judgment on something the model can reason about, not when you want a prediction about something nobody can.
The line is roughly: would a smart friend with relevant taste have a useful opinion on this? If yes, the hack is for you. If no, the hedging is correct and you should listen to it.
Other unlock phrases that work in the same register
The hack is not just be specific. It's a register — a small instruction that moves the model out of diplomat mode and into committed-friend mode. Here are four other phrases that do the same thing in slightly different ways. They are interchangeable; we use whichever one fits the question.
- "Pick one." The bluntest version. Works best for true two- or three-way comparisons. ("Pick one: red or blue for the button.")
- "No caveats." Tells the model to skip the "however, depending on…" sentences entirely. ("Is this draft ready to send? No caveats.")
- "What would you actually do?" Pulls the model toward something that sounds more like advice from a person than analysis from a service. ("I'm stuck between these two job offers. What would you actually do?")
- "Commit to a take." The version we use when we want a position we can argue with. ("Read this pitch deck and commit to a take — would you fund it?")
All four work. None of them are magic. All of them are tiny verbal nudges that tell the model the social register has shifted from "produce a balanced answer that will not offend anyone" to "produce a useful answer that will help me move." Most of the answers you actually need from Claude are in the second register, and the model will not get there on its own.
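If you send prompts to Claude from a script rather than a chat window, the same nudge is just string concatenation. A minimal sketch — the helper name, the phrase keys, and the table itself are our own invention, not part of any SDK:

```python
# Hypothetical helper: append a register-shifting phrase to any prompt.
# The phrase table mirrors the list above; the key names are ours.
UNLOCK_PHRASES = {
    "specific": "Be specific about which one you'd pick and why.",
    "pick_one": "Pick one.",
    "no_caveats": "No caveats.",
    "actually_do": "What would you actually do?",
    "commit": "Commit to a take.",
}

def commit(prompt: str, style: str = "specific") -> str:
    """Return the prompt with the chosen unlock phrase appended."""
    phrase = UNLOCK_PHRASES[style]
    return f"{prompt.rstrip()} {phrase}"

print(commit("Which name is better for my business: a-foo.com or foonix.com?"))
```

The point is only that the nudge composes: one function, applied at the end of every opinion-shaped prompt, and the whole pipeline shifts registers.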
The thirty-second test
Open a new chat with Claude right now. Pick a real question you have been waffling on for more than a day — a small one is fine. Two paint colors. Two restaurant choices for tomorrow. Two opening sentences for an email. Ask the question once, plain. Read the answer.
Then ask it again, with be specific at the end.
You will see the difference immediately. The second answer will be shorter, more committed, and almost always the one you needed. It might even be the one you'd already half-decided on, which is its own kind of useful — it turns out a committed Claude is also pretty good at telling you what you already think, which is the rarest and most valuable thing a friend can do at 9:14 on a Thursday night.
That's the hack. Two words. Sixty seconds. We use it ten times a day.
This is the third entry in the Hacks series on a-gnt. More to come.
